00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1053 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3720 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.066 The recommended git tool is: git 00:00:00.066 using credential 00000000-0000-0000-0000-000000000002 00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.099 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.140 Using shallow fetch with depth 1 00:00:00.140 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.140 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.200 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.872 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.882 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.893 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.893 > git config core.sparsecheckout # timeout=10 00:00:04.903 > git read-tree -mu HEAD # timeout=10 00:00:04.917 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.939 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.939 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.036 [Pipeline] Start of Pipeline 00:00:05.048 [Pipeline] library 00:00:05.050 Loading library shm_lib@master 00:00:05.050 Library shm_lib@master is cached. Copying from home. 00:00:05.062 [Pipeline] node 00:00:05.070 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:05.072 [Pipeline] { 00:00:05.081 [Pipeline] catchError 00:00:05.082 [Pipeline] { 00:00:05.093 [Pipeline] wrap 00:00:05.100 [Pipeline] { 00:00:05.107 [Pipeline] stage 00:00:05.108 [Pipeline] { (Prologue) 00:00:05.124 [Pipeline] echo 00:00:05.126 Node: VM-host-SM0 00:00:05.131 [Pipeline] cleanWs 00:00:05.140 [WS-CLEANUP] Deleting project workspace... 00:00:05.140 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.145 [WS-CLEANUP] done 00:00:05.348 [Pipeline] setCustomBuildProperty 00:00:05.542 [Pipeline] httpRequest 00:00:06.152 [Pipeline] echo 00:00:06.153 Sorcerer 10.211.164.20 is alive 00:00:06.164 [Pipeline] retry 00:00:06.166 [Pipeline] { 00:00:06.180 [Pipeline] httpRequest 00:00:06.184 HttpMethod: GET 00:00:06.184 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.185 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.198 Response Code: HTTP/1.1 200 OK 00:00:06.199 Success: Status code 200 is in the accepted range: 200,404 00:00:06.199 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.360 [Pipeline] } 00:00:10.377 [Pipeline] // retry 00:00:10.385 [Pipeline] sh 00:00:10.665 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.679 [Pipeline] httpRequest 00:00:11.046 [Pipeline] echo 00:00:11.047 Sorcerer 10.211.164.20 is alive 00:00:11.056 [Pipeline] retry 00:00:11.059 [Pipeline] { 00:00:11.072 [Pipeline] httpRequest 00:00:11.076 HttpMethod: GET 00:00:11.077 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.077 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.096 Response Code: HTTP/1.1 200 OK 00:00:11.097 Success: Status code 200 is in the accepted range: 200,404 00:00:11.097 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:23.595 [Pipeline] } 00:01:23.613 [Pipeline] // retry 00:01:23.621 [Pipeline] sh 00:01:23.904 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:26.454 [Pipeline] sh 00:01:26.736 + git -C spdk log --oneline -n5 00:01:26.736 c13c99a5e test: Various fixes for Fedora40 00:01:26.736 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:26.736 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:26.736 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:26.736 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:26.757 [Pipeline] withCredentials 00:01:26.768 > git --version # timeout=10 00:01:26.780 > git --version # 'git version 2.39.2' 00:01:26.801 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:26.803 [Pipeline] { 00:01:26.813 [Pipeline] retry 00:01:26.815 [Pipeline] { 00:01:26.831 [Pipeline] sh 00:01:27.114 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:27.126 [Pipeline] } 00:01:27.143 [Pipeline] // retry 00:01:27.149 [Pipeline] } 00:01:27.165 [Pipeline] // withCredentials 00:01:27.174 [Pipeline] httpRequest 00:01:27.648 [Pipeline] echo 00:01:27.650 Sorcerer 10.211.164.20 is alive 00:01:27.659 [Pipeline] retry 00:01:27.661 [Pipeline] { 00:01:27.675 [Pipeline] httpRequest 00:01:27.680 HttpMethod: GET 00:01:27.680 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.681 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.704 Response Code: HTTP/1.1 200 OK 00:01:27.704 Success: Status code 200 is in the accepted range: 200,404 00:01:27.705 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.725 [Pipeline] } 00:01:36.743 [Pipeline] // retry 
00:01:36.751 [Pipeline] sh 00:01:37.030 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.418 [Pipeline] sh 00:01:38.699 + git -C dpdk log --oneline -n5 00:01:38.699 caf0f5d395 version: 22.11.4 00:01:38.699 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:38.699 dc9c799c7d vhost: fix missing spinlock unlock 00:01:38.699 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:38.699 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:38.715 [Pipeline] writeFile 00:01:38.730 [Pipeline] sh 00:01:39.012 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:39.023 [Pipeline] sh 00:01:39.302 + cat autorun-spdk.conf 00:01:39.302 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.302 SPDK_TEST_NVMF=1 00:01:39.302 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.302 SPDK_TEST_USDT=1 00:01:39.302 SPDK_RUN_UBSAN=1 00:01:39.302 SPDK_TEST_NVMF_MDNS=1 00:01:39.302 NET_TYPE=virt 00:01:39.302 SPDK_JSONRPC_GO_CLIENT=1 00:01:39.302 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:39.302 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.302 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.308 RUN_NIGHTLY=1 00:01:39.310 [Pipeline] } 00:01:39.323 [Pipeline] // stage 00:01:39.336 [Pipeline] stage 00:01:39.338 [Pipeline] { (Run VM) 00:01:39.351 [Pipeline] sh 00:01:39.629 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:39.629 + echo 'Start stage prepare_nvme.sh' 00:01:39.629 Start stage prepare_nvme.sh 00:01:39.629 + [[ -n 7 ]] 00:01:39.629 + disk_prefix=ex7 00:01:39.629 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:39.629 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:39.629 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:39.629 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.630 ++ SPDK_TEST_NVMF=1 00:01:39.630 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.630 ++ SPDK_TEST_USDT=1 00:01:39.630 ++ SPDK_RUN_UBSAN=1 00:01:39.630 ++ SPDK_TEST_NVMF_MDNS=1 00:01:39.630 ++ NET_TYPE=virt 00:01:39.630 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:39.630 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:39.630 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.630 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.630 ++ RUN_NIGHTLY=1 00:01:39.630 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:39.630 + nvme_files=() 00:01:39.630 + declare -A nvme_files 00:01:39.630 + backend_dir=/var/lib/libvirt/images/backends 00:01:39.630 + nvme_files['nvme.img']=5G 00:01:39.630 + nvme_files['nvme-cmb.img']=5G 00:01:39.630 + nvme_files['nvme-multi0.img']=4G 00:01:39.630 + nvme_files['nvme-multi1.img']=4G 00:01:39.630 + nvme_files['nvme-multi2.img']=4G 00:01:39.630 + nvme_files['nvme-openstack.img']=8G 00:01:39.630 + nvme_files['nvme-zns.img']=5G 00:01:39.630 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:39.630 + (( SPDK_TEST_FTL == 1 )) 00:01:39.630 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:39.630 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:39.630 + for nvme in "${!nvme_files[@]}" 00:01:39.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:39.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.630 + for nvme in "${!nvme_files[@]}" 00:01:39.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:39.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.630 + for nvme in "${!nvme_files[@]}" 00:01:39.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:39.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:39.630 + for nvme in "${!nvme_files[@]}" 00:01:39.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:39.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.630 + for nvme in "${!nvme_files[@]}" 00:01:39.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:39.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.630 + for nvme in "${!nvme_files[@]}" 00:01:39.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:39.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.630 + for nvme in "${!nvme_files[@]}" 00:01:39.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:39.889 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.889 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:39.889 + echo 'End stage prepare_nvme.sh' 00:01:39.889 End stage prepare_nvme.sh 00:01:39.901 [Pipeline] sh 00:01:40.209 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:40.209 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:40.209 00:01:40.209 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:40.209 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:40.209 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:40.209 HELP=0 00:01:40.209 DRY_RUN=0 00:01:40.209 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:40.209 NVME_DISKS_TYPE=nvme,nvme, 00:01:40.209 NVME_AUTO_CREATE=0 00:01:40.209 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:40.209 NVME_CMB=,, 00:01:40.209 NVME_PMR=,, 00:01:40.209 NVME_ZNS=,, 00:01:40.209 NVME_MS=,, 00:01:40.209 NVME_FDP=,, 00:01:40.209 
SPDK_VAGRANT_DISTRO=fedora39 00:01:40.209 SPDK_VAGRANT_VMCPU=10 00:01:40.209 SPDK_VAGRANT_VMRAM=12288 00:01:40.209 SPDK_VAGRANT_PROVIDER=libvirt 00:01:40.209 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:40.209 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:40.209 SPDK_OPENSTACK_NETWORK=0 00:01:40.209 VAGRANT_PACKAGE_BOX=0 00:01:40.209 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:40.209 FORCE_DISTRO=true 00:01:40.209 VAGRANT_BOX_VERSION= 00:01:40.209 EXTRA_VAGRANTFILES= 00:01:40.209 NIC_MODEL=e1000 00:01:40.209 00:01:40.209 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:40.209 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:42.753 Bringing machine 'default' up with 'libvirt' provider... 00:01:43.320 ==> default: Creating image (snapshot of base box volume). 00:01:43.320 ==> default: Creating domain with the following settings... 00:01:43.320 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734093983_b7727d4c527c2ac2795c 00:01:43.320 ==> default: -- Domain type: kvm 00:01:43.320 ==> default: -- Cpus: 10 00:01:43.320 ==> default: -- Feature: acpi 00:01:43.320 ==> default: -- Feature: apic 00:01:43.320 ==> default: -- Feature: pae 00:01:43.320 ==> default: -- Memory: 12288M 00:01:43.320 ==> default: -- Memory Backing: hugepages: 00:01:43.320 ==> default: -- Management MAC: 00:01:43.320 ==> default: -- Loader: 00:01:43.320 ==> default: -- Nvram: 00:01:43.320 ==> default: -- Base box: spdk/fedora39 00:01:43.320 ==> default: -- Storage pool: default 00:01:43.320 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734093983_b7727d4c527c2ac2795c.img (20G) 00:01:43.320 ==> default: -- Volume Cache: default 00:01:43.320 ==> default: -- Kernel: 00:01:43.320 ==> default: -- Initrd: 00:01:43.320 ==> default: -- Graphics Type: vnc 00:01:43.320 ==> default: -- Graphics Port: -1 00:01:43.320 ==> default: -- Graphics IP: 127.0.0.1 00:01:43.320 ==> default: -- Graphics Password: Not defined 00:01:43.320 ==> default: -- Video Type: cirrus 00:01:43.320 ==> default: -- Video VRAM: 9216 00:01:43.320 ==> default: -- Sound Type: 00:01:43.320 ==> default: -- Keymap: en-us 00:01:43.320 ==> default: -- TPM Path: 00:01:43.320 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:43.320 ==> default: -- Command line args: 00:01:43.320 ==> default: -> value=-device, 00:01:43.320 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:43.320 ==> default: -> value=-drive, 00:01:43.320 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:43.320 ==> default: -> value=-device, 00:01:43.320 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.320 ==> default: -> value=-device, 00:01:43.320 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:43.320 ==> default: -> value=-drive, 00:01:43.321 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:43.321 ==> default: -> value=-device, 00:01:43.321 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.321 ==> default: -> value=-drive, 00:01:43.321 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:43.321 ==> default: -> value=-device, 00:01:43.321 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.321 ==> default: -> value=-drive, 00:01:43.321 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:43.321 ==> default: -> value=-device, 00:01:43.321 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.579 ==> default: Creating shared folders metadata... 00:01:43.579 ==> default: Starting domain. 00:01:45.490 ==> default: Waiting for domain to get an IP address... 00:02:00.368 ==> default: Waiting for SSH to become available... 00:02:01.746 ==> default: Configuring and enabling network interfaces... 00:02:05.936 default: SSH address: 192.168.121.169:22 00:02:05.936 default: SSH username: vagrant 00:02:05.936 default: SSH auth method: private key 00:02:08.471 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:16.588 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:21.865 ==> default: Mounting SSHFS shared folder... 00:02:23.241 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:23.241 ==> default: Checking Mount.. 00:02:24.177 ==> default: Folder Successfully Mounted! 00:02:24.177 ==> default: Running provisioner: file... 00:02:25.111 default: ~/.gitconfig => .gitconfig 00:02:25.678 00:02:25.678 SUCCESS! 00:02:25.678 00:02:25.678 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:25.678 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:25.678 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:25.678 00:02:25.686 [Pipeline] } 00:02:25.701 [Pipeline] // stage 00:02:25.710 [Pipeline] dir 00:02:25.711 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:25.713 [Pipeline] { 00:02:25.725 [Pipeline] catchError 00:02:25.727 [Pipeline] { 00:02:25.740 [Pipeline] sh 00:02:26.018 + vagrant ssh-config --host vagrant 00:02:26.018 + sed -ne /^Host/,$p 00:02:26.018 + tee ssh_conf 00:02:28.552 Host vagrant 00:02:28.552 HostName 192.168.121.169 00:02:28.552 User vagrant 00:02:28.552 Port 22 00:02:28.552 UserKnownHostsFile /dev/null 00:02:28.552 StrictHostKeyChecking no 00:02:28.552 PasswordAuthentication no 00:02:28.552 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:28.552 IdentitiesOnly yes 00:02:28.552 LogLevel FATAL 00:02:28.552 ForwardAgent yes 00:02:28.552 ForwardX11 yes 00:02:28.552 00:02:28.566 [Pipeline] withEnv 00:02:28.568 [Pipeline] { 00:02:28.581 [Pipeline] sh 00:02:28.859 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:28.859 source /etc/os-release 00:02:28.859 [[ -e /image.version ]] && img=$(< /image.version) 00:02:28.859 # Minimal, systemd-like check. 
00:02:28.859 if [[ -e /.dockerenv ]]; then 00:02:28.859 # Clear garbage from the node's name: 00:02:28.859 # agt-er_autotest_547-896 -> autotest_547-896 00:02:28.859 # $HOSTNAME is the actual container id 00:02:28.860 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:28.860 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:28.860 # We can assume this is a mount from a host where container is running, 00:02:28.860 # so fetch its hostname to easily identify the target swarm worker. 00:02:28.860 container="$(< /etc/hostname) ($agent)" 00:02:28.860 else 00:02:28.860 # Fallback 00:02:28.860 container=$agent 00:02:28.860 fi 00:02:28.860 fi 00:02:28.860 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:28.860 00:02:29.129 [Pipeline] } 00:02:29.145 [Pipeline] // withEnv 00:02:29.153 [Pipeline] setCustomBuildProperty 00:02:29.167 [Pipeline] stage 00:02:29.169 [Pipeline] { (Tests) 00:02:29.186 [Pipeline] sh 00:02:29.491 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:29.519 [Pipeline] sh 00:02:29.799 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:30.070 [Pipeline] timeout 00:02:30.071 Timeout set to expire in 1 hr 0 min 00:02:30.073 [Pipeline] { 00:02:30.086 [Pipeline] sh 00:02:30.366 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:30.934 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:30.946 [Pipeline] sh 00:02:31.224 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:31.497 [Pipeline] sh 00:02:31.783 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:32.058 [Pipeline] sh 00:02:32.337 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:32.596 ++ readlink -f spdk_repo 00:02:32.596 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:32.596 + [[ -n /home/vagrant/spdk_repo ]] 00:02:32.596 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:32.596 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:32.596 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:32.596 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:32.596 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:32.596 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:32.596 + cd /home/vagrant/spdk_repo 00:02:32.596 + source /etc/os-release 00:02:32.596 ++ NAME='Fedora Linux' 00:02:32.596 ++ VERSION='39 (Cloud Edition)' 00:02:32.596 ++ ID=fedora 00:02:32.596 ++ VERSION_ID=39 00:02:32.596 ++ VERSION_CODENAME= 00:02:32.596 ++ PLATFORM_ID=platform:f39 00:02:32.596 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:32.596 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:32.596 ++ LOGO=fedora-logo-icon 00:02:32.596 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:32.596 ++ HOME_URL=https://fedoraproject.org/ 00:02:32.596 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:32.596 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:32.596 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:32.596 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:32.596 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:32.596 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:32.596 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:32.596 ++ SUPPORT_END=2024-11-12 00:02:32.596 ++ VARIANT='Cloud Edition' 00:02:32.596 ++ VARIANT_ID=cloud 00:02:32.596 + uname -a 00:02:32.596 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:32.596 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:32.597 Hugepages 00:02:32.597 node hugesize free / total 00:02:32.597 node0 1048576kB 0 / 0 00:02:32.597 node0 2048kB 0 / 0 00:02:32.597 00:02:32.597 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:32.597 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:32.597 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:32.597 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:32.597 + rm -f /tmp/spdk-ld-path 00:02:32.597 + source autorun-spdk.conf 00:02:32.597 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.597 ++ SPDK_TEST_NVMF=1 00:02:32.597 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:32.597 ++ SPDK_TEST_USDT=1 00:02:32.597 ++ SPDK_RUN_UBSAN=1 00:02:32.597 ++ SPDK_TEST_NVMF_MDNS=1 00:02:32.597 ++ NET_TYPE=virt 00:02:32.597 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:32.597 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:32.597 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:32.597 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.597 ++ RUN_NIGHTLY=1 00:02:32.597 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:32.597 + [[ -n '' ]] 00:02:32.597 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:32.856 + for M in /var/spdk/build-*-manifest.txt 00:02:32.856 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:32.856 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.856 + for M in /var/spdk/build-*-manifest.txt 00:02:32.856 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:32.856 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.856 + for M in /var/spdk/build-*-manifest.txt 00:02:32.856 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:32.856 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.856 ++ uname 00:02:32.856 + [[ Linux == \L\i\n\u\x ]] 00:02:32.856 + sudo dmesg -T 00:02:32.856 + sudo dmesg --clear 00:02:32.856 + dmesg_pid=5964 00:02:32.856 + [[ Fedora Linux == FreeBSD ]] 00:02:32.856 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.856 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.856 + sudo dmesg -Tw 00:02:32.856 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:32.856 + [[ -x /usr/src/fio-static/fio ]] 00:02:32.856 + export FIO_BIN=/usr/src/fio-static/fio 00:02:32.856 + FIO_BIN=/usr/src/fio-static/fio 00:02:32.856 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:32.856 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:32.856 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:32.856 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.856 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.856 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:32.856 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.856 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.856 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:32.856 Test configuration: 00:02:32.856 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.856 SPDK_TEST_NVMF=1 00:02:32.856 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:32.856 SPDK_TEST_USDT=1 00:02:32.856 SPDK_RUN_UBSAN=1 00:02:32.856 SPDK_TEST_NVMF_MDNS=1 00:02:32.856 NET_TYPE=virt 00:02:32.856 SPDK_JSONRPC_GO_CLIENT=1 00:02:32.856 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:32.856 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:32.856 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.856 RUN_NIGHTLY=1 12:47:13 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:32.856 12:47:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:32.856 12:47:13 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:32.856 12:47:13 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.856 12:47:13 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.856 12:47:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.856 12:47:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.856 12:47:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.856 12:47:13 -- paths/export.sh@5 -- $ export PATH 00:02:32.856 12:47:13 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.856 12:47:13 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:32.856 12:47:13 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:32.856 12:47:13 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734094033.XXXXXX 00:02:32.856 12:47:13 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734094033.8Brlu8 00:02:32.856 12:47:13 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:32.856 12:47:13 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:32.856 12:47:13 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.856 12:47:13 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:32.856 12:47:13 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:32.856 12:47:13 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:32.856 12:47:13 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:32.856 12:47:13 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:32.856 12:47:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.856 12:47:13 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:32.856 12:47:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:32.856 12:47:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:32.856 12:47:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:32.856 12:47:13 -- spdk/autobuild.sh@16 -- $ date -u 00:02:32.856 Fri Dec 13 12:47:13 PM UTC 2024 00:02:32.856 12:47:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:32.856 LTS-67-gc13c99a5e 00:02:33.115 12:47:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:33.115 12:47:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:33.115 12:47:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:33.115 12:47:13 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:33.115 12:47:13 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:33.115 12:47:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.115 ************************************ 00:02:33.115 START TEST ubsan 00:02:33.115 ************************************ 00:02:33.115 using ubsan 00:02:33.115 12:47:13 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:33.115 00:02:33.115 real 0m0.000s 00:02:33.115 user 0m0.000s 00:02:33.115 sys 0m0.000s 00:02:33.115 12:47:13 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:33.115 12:47:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.115 ************************************ 00:02:33.115 END TEST ubsan 00:02:33.115 ************************************ 00:02:33.115 
12:47:13 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:33.115 12:47:13 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:33.115 12:47:13 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:33.115 12:47:13 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:33.115 12:47:13 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:33.115 12:47:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.115 ************************************ 00:02:33.115 START TEST build_native_dpdk 00:02:33.115 ************************************ 00:02:33.115 12:47:13 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:33.115 12:47:13 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:33.115 12:47:13 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:33.115 12:47:13 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:33.115 12:47:13 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:33.115 12:47:13 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:33.116 12:47:13 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:33.116 12:47:13 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:33.116 12:47:13 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:33.116 12:47:13 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:33.116 12:47:13 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:33.116 12:47:13 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:33.116 12:47:13 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:33.116 12:47:13 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:33.116 12:47:13 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:33.116 12:47:13 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:33.116 12:47:13 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:33.116 12:47:13 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:33.116 caf0f5d395 version: 22.11.4 00:02:33.116 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:33.116 dc9c799c7d vhost: fix missing spinlock unlock 00:02:33.116 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:33.116 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:33.116 12:47:13 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:33.116 12:47:13 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:33.116 12:47:13 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:33.116 12:47:13 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:33.116 12:47:13 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:33.116 12:47:13 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:33.116 12:47:13 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:33.116 12:47:13 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:33.116 12:47:13 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:33.116 12:47:13 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:33.116 12:47:13 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:33.116 12:47:13 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:33.116 12:47:13 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:33.116 12:47:13 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:33.116 12:47:13 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:33.116 12:47:13 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:33.116 12:47:13 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:33.116 12:47:13 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:33.116 12:47:13 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:33.116 12:47:13 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:33.116 12:47:13 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:33.116 12:47:13 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:33.116 12:47:13 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:33.116 12:47:13 -- scripts/common.sh@343 -- $ case "$op" in 00:02:33.116 12:47:13 -- scripts/common.sh@344 -- $ : 1 00:02:33.116 12:47:13 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:33.116 12:47:13 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.116 12:47:13 -- scripts/common.sh@364 -- $ decimal 22 00:02:33.116 12:47:13 -- scripts/common.sh@352 -- $ local d=22 00:02:33.116 12:47:13 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:33.116 12:47:13 -- scripts/common.sh@354 -- $ echo 22 00:02:33.116 12:47:13 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:33.116 12:47:13 -- scripts/common.sh@365 -- $ decimal 21 00:02:33.116 12:47:13 -- scripts/common.sh@352 -- $ local d=21 00:02:33.116 12:47:13 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:33.116 12:47:13 -- scripts/common.sh@354 -- $ echo 21 00:02:33.116 12:47:13 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:33.116 12:47:13 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:33.116 12:47:13 -- scripts/common.sh@366 -- $ return 1 00:02:33.116 12:47:13 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:33.116 patching file config/rte_config.h 00:02:33.116 Hunk #1 succeeded at 60 (offset 1 line). 00:02:33.116 12:47:13 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:33.116 12:47:13 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:33.116 12:47:13 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:33.116 12:47:13 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:33.116 12:47:13 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:33.116 12:47:13 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:33.116 12:47:13 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:33.116 12:47:13 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:33.116 12:47:13 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:33.116 12:47:13 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:33.116 12:47:13 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:33.116 12:47:13 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:33.116 12:47:13 -- scripts/common.sh@343 -- $ case "$op" in 00:02:33.116 12:47:13 -- scripts/common.sh@344 -- $ : 1 00:02:33.116 12:47:13 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:33.116 12:47:13 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:33.116 12:47:13 -- scripts/common.sh@364 -- $ decimal 22 00:02:33.116 12:47:13 -- scripts/common.sh@352 -- $ local d=22 00:02:33.116 12:47:13 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:33.116 12:47:13 -- scripts/common.sh@354 -- $ echo 22 00:02:33.116 12:47:13 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:33.116 12:47:13 -- scripts/common.sh@365 -- $ decimal 24 00:02:33.116 12:47:13 -- scripts/common.sh@352 -- $ local d=24 00:02:33.116 12:47:13 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:33.116 12:47:13 -- scripts/common.sh@354 -- $ echo 24 00:02:33.116 12:47:13 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:33.116 12:47:13 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:33.116 12:47:13 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:33.116 12:47:13 -- scripts/common.sh@367 -- $ return 0 00:02:33.116 12:47:13 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:33.116 patching file lib/pcapng/rte_pcapng.c 00:02:33.116 Hunk #1 succeeded at 110 (offset -18 lines). 
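Editor's note (not part of the captured log): the xtrace above walks through the version-comparison helper from spdk/scripts/common.sh deciding which DPDK compatibility patches to apply. Because the trace is hard to read inline, here is a condensed bash sketch of that logic; names and structure follow the trace, but details are an approximation of the real cmp_versions/lt helpers, not a verbatim copy.

    # Sketch: "is version $1 strictly older than version $2?" (approximation of lt/cmp_versions)
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"   # e.g. 22.11.4 -> (22 11 4)
        IFS='.-:' read -ra ver2 <<< "$2"   # e.g. 24.07.0 -> (24 07 0)
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1   # component is newer -> not "<"
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0   # component is older -> "<" holds
        done
        return 1   # equal versions are not strictly less
    }

In the run above, lt 22.11.4 21.11.0 evaluates false and lt 22.11.4 24.07.0 evaluates true, which is what selects the two patches applied to config/rte_config.h and lib/pcapng/rte_pcapng.c before the DPDK build is configured.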
00:02:33.116 12:47:13 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:33.116 12:47:13 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:33.116 12:47:13 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:33.116 12:47:13 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:33.116 12:47:13 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:38.429 The Meson build system 00:02:38.429 Version: 1.5.0 00:02:38.429 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:38.429 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:38.429 Build type: native build 00:02:38.429 Program cat found: YES (/usr/bin/cat) 00:02:38.429 Project name: DPDK 00:02:38.429 Project version: 22.11.4 00:02:38.429 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:38.429 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:38.429 Host machine cpu family: x86_64 00:02:38.429 Host machine cpu: x86_64 00:02:38.429 Message: ## Building in Developer Mode ## 00:02:38.429 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:38.429 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:38.429 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:38.429 Program objdump found: YES (/usr/bin/objdump) 00:02:38.429 Program python3 found: YES (/usr/bin/python3) 00:02:38.429 Program cat found: YES (/usr/bin/cat) 00:02:38.429 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:38.429 Checking for size of "void *" : 8 00:02:38.429 Checking for size of "void *" : 8 (cached) 00:02:38.429 Library m found: YES 00:02:38.429 Library numa found: YES 00:02:38.429 Has header "numaif.h" : YES 00:02:38.429 Library fdt found: NO 00:02:38.429 Library execinfo found: NO 00:02:38.429 Has header "execinfo.h" : YES 00:02:38.429 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:38.429 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:38.429 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:38.429 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:38.429 Run-time dependency openssl found: YES 3.1.1 00:02:38.429 Run-time dependency libpcap found: YES 1.10.4 00:02:38.429 Has header "pcap.h" with dependency libpcap: YES 00:02:38.429 Compiler for C supports arguments -Wcast-qual: YES 00:02:38.429 Compiler for C supports arguments -Wdeprecated: YES 00:02:38.429 Compiler for C supports arguments -Wformat: YES 00:02:38.429 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:38.429 Compiler for C supports arguments -Wformat-security: NO 00:02:38.429 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:38.429 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:38.429 Compiler for C supports arguments -Wnested-externs: YES 00:02:38.429 Compiler for C supports arguments -Wold-style-definition: YES 00:02:38.429 Compiler for C supports arguments -Wpointer-arith: YES 00:02:38.429 Compiler for C supports arguments -Wsign-compare: YES 00:02:38.429 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:38.429 Compiler for C supports arguments -Wundef: YES 00:02:38.429 Compiler for C supports arguments -Wwrite-strings: YES 00:02:38.429 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:38.429 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:38.429 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:38.429 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:38.429 Compiler for C supports arguments -mavx512f: YES 00:02:38.429 Checking if "AVX512 checking" compiles: YES 00:02:38.429 Fetching value of define "__SSE4_2__" : 1 00:02:38.429 Fetching value of define "__AES__" : 1 00:02:38.429 Fetching value of define "__AVX__" : 1 00:02:38.429 Fetching value of define "__AVX2__" : 1 00:02:38.429 Fetching value of define "__AVX512BW__" : (undefined) 00:02:38.429 Fetching value of define "__AVX512CD__" : (undefined) 00:02:38.429 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:38.429 Fetching value of define "__AVX512F__" : (undefined) 00:02:38.429 Fetching value of define "__AVX512VL__" : (undefined) 00:02:38.429 Fetching value of define "__PCLMUL__" : 1 00:02:38.429 Fetching value of define "__RDRND__" : 1 00:02:38.429 Fetching value of define "__RDSEED__" : 1 00:02:38.429 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:38.429 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:38.429 Message: lib/kvargs: Defining dependency "kvargs" 00:02:38.429 Message: lib/telemetry: Defining dependency "telemetry" 00:02:38.429 Checking for function "getentropy" : YES 00:02:38.429 Message: lib/eal: Defining dependency "eal" 00:02:38.429 Message: lib/ring: Defining dependency "ring" 00:02:38.429 Message: lib/rcu: Defining dependency "rcu" 00:02:38.429 Message: lib/mempool: Defining dependency "mempool" 00:02:38.429 Message: lib/mbuf: Defining dependency "mbuf" 00:02:38.429 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:38.429 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.429 Compiler for C supports arguments -mpclmul: YES 00:02:38.429 Compiler for C supports arguments -maes: YES 00:02:38.429 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.429 Compiler for C supports arguments -mavx512bw: YES 00:02:38.429 Compiler for C supports arguments -mavx512dq: YES 00:02:38.429 Compiler for C supports arguments -mavx512vl: YES 00:02:38.429 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:38.429 Compiler for C supports arguments -mavx2: YES 00:02:38.429 Compiler for C supports arguments -mavx: YES 00:02:38.429 Message: lib/net: Defining dependency "net" 00:02:38.429 Message: lib/meter: Defining dependency "meter" 00:02:38.429 Message: lib/ethdev: Defining dependency "ethdev" 00:02:38.429 Message: lib/pci: Defining dependency "pci" 00:02:38.429 Message: lib/cmdline: Defining dependency "cmdline" 00:02:38.429 Message: lib/metrics: Defining dependency "metrics" 00:02:38.429 Message: lib/hash: Defining dependency "hash" 00:02:38.429 Message: lib/timer: Defining dependency "timer" 00:02:38.429 Fetching value of define "__AVX2__" : 1 (cached) 00:02:38.429 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.429 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:38.429 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:38.429 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:38.429 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:38.429 Message: lib/acl: Defining dependency "acl" 00:02:38.429 Message: lib/bbdev: Defining dependency "bbdev" 00:02:38.429 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:38.429 Run-time dependency libelf found: YES 0.191 00:02:38.429 Message: lib/bpf: Defining dependency "bpf" 00:02:38.429 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:38.429 Message: lib/compressdev: Defining dependency "compressdev" 00:02:38.429 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:38.429 Message: lib/distributor: Defining dependency "distributor" 00:02:38.429 Message: lib/efd: Defining dependency "efd" 00:02:38.429 Message: lib/eventdev: Defining dependency "eventdev" 00:02:38.429 Message: lib/gpudev: Defining dependency "gpudev" 00:02:38.429 Message: lib/gro: Defining dependency "gro" 00:02:38.429 Message: lib/gso: Defining dependency "gso" 00:02:38.429 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:38.429 Message: lib/jobstats: Defining dependency "jobstats" 00:02:38.429 Message: lib/latencystats: Defining dependency "latencystats" 00:02:38.429 Message: lib/lpm: Defining dependency "lpm" 00:02:38.429 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.429 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:38.429 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:38.429 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:38.429 Message: lib/member: Defining dependency "member" 00:02:38.429 Message: lib/pcapng: Defining dependency "pcapng" 00:02:38.429 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:38.429 Message: lib/power: Defining dependency "power" 00:02:38.429 Message: lib/rawdev: Defining dependency "rawdev" 00:02:38.429 Message: lib/regexdev: Defining dependency "regexdev" 00:02:38.429 Message: lib/dmadev: Defining dependency "dmadev" 00:02:38.429 Message: lib/rib: Defining 
dependency "rib" 00:02:38.429 Message: lib/reorder: Defining dependency "reorder" 00:02:38.429 Message: lib/sched: Defining dependency "sched" 00:02:38.429 Message: lib/security: Defining dependency "security" 00:02:38.429 Message: lib/stack: Defining dependency "stack" 00:02:38.429 Has header "linux/userfaultfd.h" : YES 00:02:38.429 Message: lib/vhost: Defining dependency "vhost" 00:02:38.429 Message: lib/ipsec: Defining dependency "ipsec" 00:02:38.429 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.429 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:38.429 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:38.429 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:38.429 Message: lib/fib: Defining dependency "fib" 00:02:38.429 Message: lib/port: Defining dependency "port" 00:02:38.429 Message: lib/pdump: Defining dependency "pdump" 00:02:38.429 Message: lib/table: Defining dependency "table" 00:02:38.429 Message: lib/pipeline: Defining dependency "pipeline" 00:02:38.429 Message: lib/graph: Defining dependency "graph" 00:02:38.429 Message: lib/node: Defining dependency "node" 00:02:38.429 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:38.429 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:38.429 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:38.429 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:38.430 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:38.430 Compiler for C supports arguments -Wno-unused-value: YES 00:02:38.430 Compiler for C supports arguments -Wno-format: YES 00:02:38.430 Compiler for C supports arguments -Wno-format-security: YES 00:02:38.430 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.806 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.806 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.806 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:39.806 Fetching value of define "__AVX2__" : 1 (cached) 00:02:39.806 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.806 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.806 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.806 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:39.806 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:39.806 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:39.806 Configuring doxy-api.conf using configuration 00:02:39.806 Program sphinx-build found: NO 00:02:39.806 Configuring rte_build_config.h using configuration 00:02:39.806 Message: 00:02:39.806 ================= 00:02:39.806 Applications Enabled 00:02:39.806 ================= 00:02:39.806 00:02:39.806 apps: 00:02:39.806 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:39.806 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:39.806 test-security-perf, 00:02:39.806 00:02:39.806 Message: 00:02:39.806 ================= 00:02:39.806 Libraries Enabled 00:02:39.806 ================= 00:02:39.806 00:02:39.806 libs: 00:02:39.806 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:39.806 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:39.806 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:39.806 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:39.806 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:39.806 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:39.806 table, pipeline, graph, node, 00:02:39.806 00:02:39.806 Message: 00:02:39.806 =============== 00:02:39.806 Drivers Enabled 00:02:39.806 =============== 00:02:39.806 00:02:39.806 common: 00:02:39.806 00:02:39.806 bus: 00:02:39.806 pci, vdev, 00:02:39.806 mempool: 00:02:39.806 ring, 00:02:39.806 dma: 00:02:39.806 00:02:39.806 net: 00:02:39.806 i40e, 00:02:39.806 raw: 00:02:39.806 00:02:39.806 crypto: 00:02:39.806 00:02:39.806 compress: 00:02:39.806 00:02:39.806 regex: 00:02:39.806 00:02:39.806 vdpa: 00:02:39.806 00:02:39.806 event: 00:02:39.806 00:02:39.806 baseband: 00:02:39.806 00:02:39.806 gpu: 00:02:39.806 00:02:39.806 00:02:39.806 Message: 00:02:39.806 ================= 00:02:39.806 Content Skipped 00:02:39.806 ================= 00:02:39.806 00:02:39.806 apps: 00:02:39.806 00:02:39.806 libs: 00:02:39.806 kni: explicitly disabled via build config (deprecated lib) 00:02:39.806 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:39.806 00:02:39.806 drivers: 00:02:39.806 common/cpt: not in enabled drivers build config 00:02:39.806 common/dpaax: not in enabled drivers build config 00:02:39.806 common/iavf: not in enabled drivers build config 00:02:39.806 common/idpf: not in enabled drivers build config 00:02:39.806 common/mvep: not in enabled drivers build config 00:02:39.806 common/octeontx: not in enabled drivers build config 00:02:39.806 bus/auxiliary: not in enabled drivers build config 00:02:39.806 bus/dpaa: not in enabled drivers build config 00:02:39.806 bus/fslmc: not in enabled drivers build config 00:02:39.806 bus/ifpga: not in enabled drivers build config 00:02:39.806 bus/vmbus: not in enabled drivers build config 00:02:39.806 common/cnxk: not in enabled drivers build config 00:02:39.806 common/mlx5: not in enabled drivers build config 00:02:39.806 common/qat: not in enabled drivers build config 00:02:39.806 common/sfc_efx: not in enabled drivers build config 00:02:39.806 mempool/bucket: not in enabled drivers build config 00:02:39.806 mempool/cnxk: not in enabled drivers build config 00:02:39.806 mempool/dpaa: not in enabled drivers build config 00:02:39.806 mempool/dpaa2: not in enabled drivers build config 00:02:39.806 mempool/octeontx: not in enabled drivers build config 00:02:39.806 mempool/stack: not in enabled drivers build config 00:02:39.806 dma/cnxk: not in enabled drivers build config 00:02:39.806 dma/dpaa: not in enabled drivers build config 00:02:39.806 dma/dpaa2: not in enabled drivers build config 00:02:39.806 dma/hisilicon: not in enabled drivers build config 00:02:39.806 dma/idxd: not in enabled drivers build config 00:02:39.806 dma/ioat: not in enabled drivers build config 00:02:39.806 dma/skeleton: not in enabled drivers build config 00:02:39.806 net/af_packet: not in enabled drivers build config 00:02:39.806 net/af_xdp: not in enabled drivers build config 00:02:39.806 net/ark: not in enabled drivers build config 00:02:39.806 net/atlantic: not in enabled drivers build config 00:02:39.806 net/avp: not in enabled drivers build config 00:02:39.806 net/axgbe: not in enabled drivers build config 00:02:39.806 net/bnx2x: not in enabled drivers build config 00:02:39.806 net/bnxt: not in enabled drivers build config 00:02:39.806 net/bonding: not in enabled drivers build config 00:02:39.806 net/cnxk: not in enabled drivers build config 00:02:39.806 net/cxgbe: not in 
enabled drivers build config 00:02:39.806 net/dpaa: not in enabled drivers build config 00:02:39.806 net/dpaa2: not in enabled drivers build config 00:02:39.806 net/e1000: not in enabled drivers build config 00:02:39.806 net/ena: not in enabled drivers build config 00:02:39.806 net/enetc: not in enabled drivers build config 00:02:39.806 net/enetfec: not in enabled drivers build config 00:02:39.806 net/enic: not in enabled drivers build config 00:02:39.806 net/failsafe: not in enabled drivers build config 00:02:39.806 net/fm10k: not in enabled drivers build config 00:02:39.806 net/gve: not in enabled drivers build config 00:02:39.806 net/hinic: not in enabled drivers build config 00:02:39.806 net/hns3: not in enabled drivers build config 00:02:39.806 net/iavf: not in enabled drivers build config 00:02:39.806 net/ice: not in enabled drivers build config 00:02:39.806 net/idpf: not in enabled drivers build config 00:02:39.806 net/igc: not in enabled drivers build config 00:02:39.806 net/ionic: not in enabled drivers build config 00:02:39.806 net/ipn3ke: not in enabled drivers build config 00:02:39.806 net/ixgbe: not in enabled drivers build config 00:02:39.806 net/kni: not in enabled drivers build config 00:02:39.806 net/liquidio: not in enabled drivers build config 00:02:39.806 net/mana: not in enabled drivers build config 00:02:39.807 net/memif: not in enabled drivers build config 00:02:39.807 net/mlx4: not in enabled drivers build config 00:02:39.807 net/mlx5: not in enabled drivers build config 00:02:39.807 net/mvneta: not in enabled drivers build config 00:02:39.807 net/mvpp2: not in enabled drivers build config 00:02:39.807 net/netvsc: not in enabled drivers build config 00:02:39.807 net/nfb: not in enabled drivers build config 00:02:39.807 net/nfp: not in enabled drivers build config 00:02:39.807 net/ngbe: not in enabled drivers build config 00:02:39.807 net/null: not in enabled drivers build config 00:02:39.807 net/octeontx: not in enabled drivers build config 00:02:39.807 net/octeon_ep: not in enabled drivers build config 00:02:39.807 net/pcap: not in enabled drivers build config 00:02:39.807 net/pfe: not in enabled drivers build config 00:02:39.807 net/qede: not in enabled drivers build config 00:02:39.807 net/ring: not in enabled drivers build config 00:02:39.807 net/sfc: not in enabled drivers build config 00:02:39.807 net/softnic: not in enabled drivers build config 00:02:39.807 net/tap: not in enabled drivers build config 00:02:39.807 net/thunderx: not in enabled drivers build config 00:02:39.807 net/txgbe: not in enabled drivers build config 00:02:39.807 net/vdev_netvsc: not in enabled drivers build config 00:02:39.807 net/vhost: not in enabled drivers build config 00:02:39.807 net/virtio: not in enabled drivers build config 00:02:39.807 net/vmxnet3: not in enabled drivers build config 00:02:39.807 raw/cnxk_bphy: not in enabled drivers build config 00:02:39.807 raw/cnxk_gpio: not in enabled drivers build config 00:02:39.807 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:39.807 raw/ifpga: not in enabled drivers build config 00:02:39.807 raw/ntb: not in enabled drivers build config 00:02:39.807 raw/skeleton: not in enabled drivers build config 00:02:39.807 crypto/armv8: not in enabled drivers build config 00:02:39.807 crypto/bcmfs: not in enabled drivers build config 00:02:39.807 crypto/caam_jr: not in enabled drivers build config 00:02:39.807 crypto/ccp: not in enabled drivers build config 00:02:39.807 crypto/cnxk: not in enabled drivers build config 00:02:39.807 
crypto/dpaa_sec: not in enabled drivers build config 00:02:39.807 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.807 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.807 crypto/mlx5: not in enabled drivers build config 00:02:39.807 crypto/mvsam: not in enabled drivers build config 00:02:39.807 crypto/nitrox: not in enabled drivers build config 00:02:39.807 crypto/null: not in enabled drivers build config 00:02:39.807 crypto/octeontx: not in enabled drivers build config 00:02:39.807 crypto/openssl: not in enabled drivers build config 00:02:39.807 crypto/scheduler: not in enabled drivers build config 00:02:39.807 crypto/uadk: not in enabled drivers build config 00:02:39.807 crypto/virtio: not in enabled drivers build config 00:02:39.807 compress/isal: not in enabled drivers build config 00:02:39.807 compress/mlx5: not in enabled drivers build config 00:02:39.807 compress/octeontx: not in enabled drivers build config 00:02:39.807 compress/zlib: not in enabled drivers build config 00:02:39.807 regex/mlx5: not in enabled drivers build config 00:02:39.807 regex/cn9k: not in enabled drivers build config 00:02:39.807 vdpa/ifc: not in enabled drivers build config 00:02:39.807 vdpa/mlx5: not in enabled drivers build config 00:02:39.807 vdpa/sfc: not in enabled drivers build config 00:02:39.807 event/cnxk: not in enabled drivers build config 00:02:39.807 event/dlb2: not in enabled drivers build config 00:02:39.807 event/dpaa: not in enabled drivers build config 00:02:39.807 event/dpaa2: not in enabled drivers build config 00:02:39.807 event/dsw: not in enabled drivers build config 00:02:39.807 event/opdl: not in enabled drivers build config 00:02:39.807 event/skeleton: not in enabled drivers build config 00:02:39.807 event/sw: not in enabled drivers build config 00:02:39.807 event/octeontx: not in enabled drivers build config 00:02:39.807 baseband/acc: not in enabled drivers build config 00:02:39.807 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:39.807 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:39.807 baseband/la12xx: not in enabled drivers build config 00:02:39.807 baseband/null: not in enabled drivers build config 00:02:39.807 baseband/turbo_sw: not in enabled drivers build config 00:02:39.807 gpu/cuda: not in enabled drivers build config 00:02:39.807 00:02:39.807 00:02:39.807 Build targets in project: 314 00:02:39.807 00:02:39.807 DPDK 22.11.4 00:02:39.807 00:02:39.807 User defined options 00:02:39.807 libdir : lib 00:02:39.807 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:39.807 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:39.807 c_link_args : 00:02:39.807 enable_docs : false 00:02:39.807 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:39.807 enable_kmods : false 00:02:39.807 machine : native 00:02:39.807 tests : false 00:02:39.807 00:02:39.807 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:39.807 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
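(Editor's note: a minimal sketch, not taken from the log itself. The "User defined options" summary above implies the configure step was driven by an invocation roughly equivalent to the explicit `meson setup` form below; writing it out this way also avoids the deprecation warning about running meson without the `setup` subcommand, and makes it visible why only the pci/vdev buses, the ring mempool and the i40e net driver show up under "Drivers Enabled". The build and source directory paths are taken from the surrounding log lines; SPDK's autobuild scripts may pass additional flags not shown here, so treat this as an approximation rather than the exact command that was run.)

    meson setup \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dmachine=native \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      /home/vagrant/spdk_repo/dpdk/build-tmp \
      /home/vagrant/spdk_repo/dpdk

(The compile step that follows in the log then simply runs ninja against that build directory, as shown on the next line.)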
00:02:39.807 12:47:20 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:40.065 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:40.065 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:40.065 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:40.065 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:40.065 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:40.065 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:40.065 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:40.065 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:40.065 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:40.065 [9/743] Linking static target lib/librte_kvargs.a 00:02:40.065 [10/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:40.065 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:40.323 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:40.323 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:40.323 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:40.323 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:40.323 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.323 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:40.323 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:40.323 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.323 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.323 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:40.323 [22/743] Linking target lib/librte_kvargs.so.23.0 00:02:40.323 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:40.582 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.582 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.582 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.582 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.582 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.582 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.582 [30/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:40.582 [31/743] Linking static target lib/librte_telemetry.a 00:02:40.582 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.582 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.582 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:40.582 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.840 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.840 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.840 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:40.840 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.840 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.840 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.840 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.098 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.098 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.098 [45/743] Linking target lib/librte_telemetry.so.23.0 00:02:41.098 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.098 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.098 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.098 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:41.098 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:41.098 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.098 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:41.098 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:41.098 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.355 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:41.355 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.355 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:41.356 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:41.356 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:41.356 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.356 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:41.356 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:41.356 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:41.356 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:41.356 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:41.356 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:41.356 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:41.356 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:41.356 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:41.614 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.614 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:41.614 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:41.614 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:41.614 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:41.614 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:41.614 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:41.614 [77/743] Generating lib/rte_eal_def with a custom command 00:02:41.614 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:41.614 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:41.614 [80/743] Generating lib/rte_ring_def with a custom command 00:02:41.614 [81/743] Generating lib/rte_ring_mingw with a custom command 00:02:41.614 [82/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:41.614 [83/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:41.614 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:41.614 [85/743] Generating lib/rte_rcu_def with a custom command 00:02:41.614 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:41.873 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:41.873 [88/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:41.873 [89/743] Linking static target lib/librte_ring.a 00:02:41.873 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:41.873 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:41.873 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.873 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.131 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.131 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.131 [96/743] Linking static target lib/librte_eal.a 00:02:42.388 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:42.388 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:42.388 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:42.388 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:42.388 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:42.388 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:42.388 [103/743] Linking static target lib/librte_rcu.a 00:02:42.388 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:42.647 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.647 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:42.647 [107/743] Linking static target lib/librte_mempool.a 00:02:42.647 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.905 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.905 [110/743] Generating lib/rte_net_def with a custom command 00:02:42.905 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:42.905 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:42.905 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:42.905 [114/743] Generating lib/rte_meter_def with a custom command 00:02:42.905 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:42.905 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.905 [117/743] Linking static target lib/librte_meter.a 00:02:43.164 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.165 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.165 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.165 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.165 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.423 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:43.423 [124/743] Linking static target lib/librte_mbuf.a 00:02:43.423 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.423 [126/743] Linking static target lib/librte_net.a 00:02:43.423 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.681 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.681 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.940 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:43.940 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.940 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.940 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.940 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.198 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.456 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.456 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:44.456 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:44.456 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.714 [140/743] Generating lib/rte_pci_def with a custom command 00:02:44.714 [141/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.714 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:44.714 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.714 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.714 [145/743] Linking static target lib/librte_pci.a 00:02:44.714 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.714 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.714 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.714 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.973 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.973 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.973 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.973 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:44.973 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:44.973 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:44.973 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:44.973 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:44.973 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.973 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:44.973 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.973 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:44.973 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:45.231 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:45.231 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.231 [165/743] Generating lib/rte_hash_def with a custom command 00:02:45.231 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.231 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:45.231 [168/743] Generating lib/rte_timer_def with a custom command 00:02:45.231 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:45.231 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.231 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.231 [172/743] Linking static target lib/librte_cmdline.a 00:02:45.489 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.489 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:45.747 [175/743] Linking static target lib/librte_metrics.a 00:02:45.747 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.747 [177/743] Linking static target lib/librte_timer.a 00:02:46.006 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.006 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.006 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.264 [181/743] Linking static target lib/librte_ethdev.a 00:02:46.264 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:46.264 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:46.264 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.868 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:46.868 [186/743] Generating lib/rte_acl_def with a custom command 00:02:46.868 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:46.868 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:46.868 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:46.868 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:46.868 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:46.868 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:46.868 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:47.127 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:47.694 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:47.694 [196/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:47.694 [197/743] Linking static target lib/librte_bitratestats.a 00:02:47.694 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.694 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:47.694 [200/743] Linking static target lib/librte_bbdev.a 00:02:47.694 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:47.952 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.952 [203/743] Linking static target lib/librte_hash.a 00:02:48.211 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:48.211 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:02:48.211 [206/743] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:48.211 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.469 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:48.469 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:48.728 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.728 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:48.728 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:48.728 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:48.728 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:48.728 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:48.728 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:48.987 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:48.987 [218/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:48.987 [219/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:48.987 [220/743] Linking static target lib/librte_cfgfile.a 00:02:48.987 [221/743] Linking static target lib/librte_acl.a 00:02:48.987 [222/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.246 [223/743] Linking target lib/librte_eal.so.23.0 00:02:49.246 [224/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:49.246 [225/743] Generating lib/rte_compressdev_def with a custom command 00:02:49.246 [226/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:49.246 [227/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:49.246 [228/743] Linking target lib/librte_ring.so.23.0 00:02:49.246 [229/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.246 [230/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.246 [231/743] Linking target lib/librte_meter.so.23.0 00:02:49.505 [232/743] Linking target lib/librte_pci.so.23.0 00:02:49.505 [233/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:49.505 [234/743] Linking target lib/librte_rcu.so.23.0 00:02:49.505 [235/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:49.505 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:49.505 [237/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:49.505 [238/743] Linking target lib/librte_timer.so.23.0 00:02:49.505 [239/743] Linking target lib/librte_mempool.so.23.0 00:02:49.505 [240/743] Linking target lib/librte_acl.so.23.0 00:02:49.505 [241/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:49.505 [242/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:49.505 [243/743] Linking target lib/librte_cfgfile.so.23.0 00:02:49.505 [244/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:49.764 [245/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:49.764 [246/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:49.764 [247/743] Generating lib/rte_cryptodev_def with a custom command 00:02:49.764 [248/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:49.764 
[249/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:49.764 [250/743] Linking target lib/librte_mbuf.so.23.0 00:02:49.764 [251/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:49.764 [252/743] Linking static target lib/librte_bpf.a 00:02:49.764 [253/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:49.764 [254/743] Linking static target lib/librte_compressdev.a 00:02:49.764 [255/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:49.764 [256/743] Linking target lib/librte_net.so.23.0 00:02:49.764 [257/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.022 [258/743] Linking target lib/librte_bbdev.so.23.0 00:02:50.022 [259/743] Generating lib/rte_distributor_def with a custom command 00:02:50.022 [260/743] Generating lib/rte_distributor_mingw with a custom command 00:02:50.022 [261/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:50.022 [262/743] Linking target lib/librte_cmdline.so.23.0 00:02:50.022 [263/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.022 [264/743] Generating lib/rte_efd_def with a custom command 00:02:50.022 [265/743] Linking target lib/librte_hash.so.23.0 00:02:50.022 [266/743] Generating lib/rte_efd_mingw with a custom command 00:02:50.281 [267/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:50.281 [268/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:50.281 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:50.539 [270/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.539 [271/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.539 [272/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.797 [273/743] Linking target lib/librte_ethdev.so.23.0 00:02:50.797 [274/743] Linking target lib/librte_compressdev.so.23.0 00:02:50.797 [275/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:50.797 [276/743] Linking static target lib/librte_distributor.a 00:02:50.797 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:50.797 [278/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:50.797 [279/743] Linking target lib/librte_metrics.so.23.0 00:02:50.797 [280/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.055 [281/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:51.055 [282/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:51.055 [283/743] Linking target lib/librte_bpf.so.23.0 00:02:51.055 [284/743] Linking target lib/librte_distributor.so.23.0 00:02:51.055 [285/743] Linking target lib/librte_bitratestats.so.23.0 00:02:51.055 [286/743] Generating lib/rte_eventdev_def with a custom command 00:02:51.055 [287/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:51.055 [288/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:51.055 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:51.055 [290/743] Generating lib/rte_gpudev_mingw with a 
custom command 00:02:51.313 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:51.572 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:51.572 [293/743] Linking static target lib/librte_efd.a 00:02:51.830 [294/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.830 [295/743] Linking target lib/librte_efd.so.23.0 00:02:51.830 [296/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:51.830 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:52.089 [298/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:52.089 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:52.089 [300/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:52.089 [301/743] Generating lib/rte_gro_def with a custom command 00:02:52.089 [302/743] Linking static target lib/librte_gpudev.a 00:02:52.089 [303/743] Generating lib/rte_gro_mingw with a custom command 00:02:52.089 [304/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:52.089 [305/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.089 [306/743] Linking static target lib/librte_cryptodev.a 00:02:52.347 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:52.606 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:52.606 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:52.606 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:52.606 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:52.606 [312/743] Generating lib/rte_gso_def with a custom command 00:02:52.606 [313/743] Generating lib/rte_gso_mingw with a custom command 00:02:52.865 [314/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.865 [315/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:52.865 [316/743] Linking target lib/librte_gpudev.so.23.0 00:02:52.865 [317/743] Linking static target lib/librte_gro.a 00:02:52.865 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:53.123 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.123 [320/743] Linking target lib/librte_gro.so.23.0 00:02:53.123 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:53.123 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:53.123 [323/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:53.123 [324/743] Linking static target lib/librte_eventdev.a 00:02:53.123 [325/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:53.123 [326/743] Generating lib/rte_ip_frag_def with a custom command 00:02:53.382 [327/743] Linking static target lib/librte_gso.a 00:02:53.382 [328/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:53.382 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:53.382 [330/743] Linking static target lib/librte_jobstats.a 00:02:53.382 [331/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.382 [332/743] Generating lib/rte_jobstats_def with a custom command 00:02:53.382 [333/743] Linking target lib/librte_gso.so.23.0 00:02:53.382 [334/743] Generating 
lib/rte_jobstats_mingw with a custom command 00:02:53.382 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:53.640 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:53.640 [337/743] Generating lib/rte_latencystats_def with a custom command 00:02:53.640 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:53.640 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:53.640 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:53.640 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:53.640 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:02:53.640 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.898 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:53.898 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:53.898 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:53.898 [347/743] Linking static target lib/librte_ip_frag.a 00:02:54.157 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.157 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:54.157 [350/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.157 [351/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:54.157 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:54.415 [353/743] Linking static target lib/librte_latencystats.a 00:02:54.415 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:02:54.415 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:54.415 [356/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:54.415 [357/743] Generating lib/rte_member_def with a custom command 00:02:54.415 [358/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:54.415 [359/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:54.416 [360/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:54.416 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:54.416 [362/743] Generating lib/rte_member_mingw with a custom command 00:02:54.416 [363/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:54.416 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.673 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:54.673 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:54.673 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.673 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.673 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.931 [370/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:54.931 [371/743] Linking static target lib/librte_lpm.a 00:02:54.931 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:54.931 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:55.189 [374/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:55.189 [375/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:55.189 [376/743] Generating lib/rte_power_def with a custom command 00:02:55.189 [377/743] Generating lib/rte_power_mingw with a custom command 00:02:55.189 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:55.189 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:55.189 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:55.189 [381/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:55.189 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:55.189 [383/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.189 [384/743] Generating lib/rte_regexdev_def with a custom command 00:02:55.447 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:55.447 [386/743] Linking target lib/librte_lpm.so.23.0 00:02:55.447 [387/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:55.447 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:55.447 [389/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:55.447 [390/743] Linking static target lib/librte_pcapng.a 00:02:55.447 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:55.447 [392/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:55.447 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:55.447 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:55.447 [395/743] Generating lib/rte_rib_def with a custom command 00:02:55.447 [396/743] Linking static target lib/librte_rawdev.a 00:02:55.447 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:55.765 [398/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:55.765 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:55.765 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:55.765 [401/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:55.765 [402/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.765 [403/743] Linking static target lib/librte_dmadev.a 00:02:55.765 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:55.765 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.765 [406/743] Linking static target lib/librte_power.a 00:02:56.024 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:56.024 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.024 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:56.024 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:56.024 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:56.024 [412/743] Linking static target lib/librte_regexdev.a 00:02:56.024 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:56.024 [414/743] Generating lib/rte_sched_def with a custom command 00:02:56.282 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:56.282 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:56.282 [417/743] Generating lib/rte_security_def with a custom command 00:02:56.282 [418/743] Compiling C 
object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:56.282 [419/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:56.282 [420/743] Linking static target lib/librte_member.a 00:02:56.282 [421/743] Generating lib/rte_security_mingw with a custom command 00:02:56.282 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.282 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:56.282 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:56.282 [425/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:56.282 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:56.282 [427/743] Linking static target lib/librte_reorder.a 00:02:56.282 [428/743] Generating lib/rte_stack_def with a custom command 00:02:56.541 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:56.541 [430/743] Generating lib/rte_stack_mingw with a custom command 00:02:56.541 [431/743] Linking static target lib/librte_stack.a 00:02:56.541 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:56.541 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.541 [434/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:56.541 [435/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.541 [436/743] Linking target lib/librte_member.so.23.0 00:02:56.541 [437/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.541 [438/743] Linking target lib/librte_reorder.so.23.0 00:02:56.799 [439/743] Linking target lib/librte_stack.so.23.0 00:02:56.799 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:56.799 [441/743] Linking static target lib/librte_rib.a 00:02:56.799 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.799 [443/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.799 [444/743] Linking target lib/librte_power.so.23.0 00:02:56.799 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:57.058 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.058 [447/743] Linking static target lib/librte_security.a 00:02:57.058 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.058 [449/743] Linking target lib/librte_rib.so.23.0 00:02:57.316 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.316 [451/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:57.316 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:57.316 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:57.316 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.316 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.575 [456/743] Linking target lib/librte_security.so.23.0 00:02:57.575 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.575 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:57.575 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:57.833 [460/743] Linking static target lib/librte_sched.a 00:02:58.092 
[461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.092 [462/743] Linking target lib/librte_sched.so.23.0 00:02:58.092 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.092 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:58.092 [465/743] Generating lib/rte_ipsec_def with a custom command 00:02:58.092 [466/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:58.350 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:58.350 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:58.350 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:58.350 [470/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:58.350 [471/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:58.917 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:58.917 [473/743] Generating lib/rte_fib_def with a custom command 00:02:58.917 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:58.917 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:58.917 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:58.917 [477/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:58.917 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:58.917 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:58.917 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:59.175 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:59.175 [482/743] Linking static target lib/librte_ipsec.a 00:02:59.434 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.434 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:59.434 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:59.692 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:59.692 [487/743] Linking static target lib/librte_fib.a 00:02:59.692 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:59.950 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:59.950 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:59.950 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:59.950 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.950 [493/743] Linking target lib/librte_fib.so.23.0 00:03:00.209 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:00.776 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:00.776 [496/743] Generating lib/rte_port_def with a custom command 00:03:00.776 [497/743] Generating lib/rte_port_mingw with a custom command 00:03:00.776 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:00.776 [499/743] Generating lib/rte_pdump_def with a custom command 00:03:00.776 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:03:00.776 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:00.776 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:01.034 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:01.034 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:01.034 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:01.034 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:01.034 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:01.293 [508/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:01.293 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:01.293 [510/743] Linking static target lib/librte_port.a 00:03:01.552 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:01.552 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:01.811 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.811 [514/743] Linking target lib/librte_port.so.23.0 00:03:01.811 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:01.811 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:01.811 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:01.811 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:01.811 [519/743] Linking static target lib/librte_pdump.a 00:03:02.070 [520/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:02.070 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.328 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:02.328 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:02.328 [524/743] Generating lib/rte_table_def with a custom command 00:03:02.329 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:02.587 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:02.587 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:02.587 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:02.846 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:02.846 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:03.105 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:03.105 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:03.105 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:03.105 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:03.105 [535/743] Linking static target lib/librte_table.a 00:03:03.364 [536/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:03.364 [537/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:03.364 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:03.932 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.932 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:03.932 [541/743] Linking target lib/librte_table.so.23.0 00:03:03.932 [542/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:04.190 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:04.190 [544/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:04.190 [545/743] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:04.190 [546/743] Generating lib/rte_graph_def with a custom command 00:03:04.190 [547/743] Generating lib/rte_graph_mingw with a custom command 00:03:04.190 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:04.755 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:04.756 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:04.756 [551/743] Linking static target lib/librte_graph.a 00:03:04.756 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:04.756 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:05.013 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:05.013 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:05.290 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:05.290 [557/743] Generating lib/rte_node_def with a custom command 00:03:05.290 [558/743] Generating lib/rte_node_mingw with a custom command 00:03:05.290 [559/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:05.290 [560/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:05.290 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.559 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:05.559 [563/743] Linking target lib/librte_graph.so.23.0 00:03:05.559 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:05.559 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:05.559 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:05.559 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:05.559 [568/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:05.559 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:05.559 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:05.559 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:05.818 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:05.818 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.818 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:05.818 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:05.818 [576/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.818 [577/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.818 [578/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:06.077 [579/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.077 [580/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:06.077 [581/743] Linking static target lib/librte_node.a 00:03:06.077 [582/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.077 [583/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:06.077 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:06.077 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.077 [586/743] Linking static target drivers/librte_bus_vdev.a 00:03:06.336 [587/743] Generating 
lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.336 [588/743] Linking target lib/librte_node.so.23.0 00:03:06.336 [589/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:06.336 [590/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.336 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.336 [592/743] Linking static target drivers/librte_bus_pci.a 00:03:06.336 [593/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.336 [594/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.594 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:06.594 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:06.853 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.853 [598/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:06.853 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:06.853 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:06.853 [601/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:07.112 [602/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:07.112 [603/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:07.112 [604/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:07.112 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:07.112 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.112 [607/743] Linking static target drivers/librte_mempool_ring.a 00:03:07.112 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.372 [609/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:07.372 [610/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:07.939 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:07.939 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:08.506 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:08.506 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:08.506 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:08.764 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:08.764 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:09.023 [618/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:09.281 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:09.539 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:09.539 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:09.539 [622/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:09.539 [623/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:09.539 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:03:09.539 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:10.475 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:10.733 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:10.733 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:10.993 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:10.993 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:10.993 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:10.993 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:10.993 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:11.251 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:11.509 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:11.509 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:11.768 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:12.027 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:12.027 [639/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.027 [640/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:12.027 [641/743] Linking static target lib/librte_vhost.a 00:03:12.027 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:12.286 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:12.286 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:12.286 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:12.286 [646/743] Linking static target drivers/librte_net_i40e.a 00:03:12.545 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:12.545 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:12.545 [649/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:12.803 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:12.803 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:13.062 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:13.062 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.062 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:13.062 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:13.062 [656/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.062 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:13.321 [658/743] Linking target lib/librte_vhost.so.23.0 00:03:13.579 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:13.838 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:13.838 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:13.838 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:13.838 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:14.096 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:14.096 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:14.096 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:14.096 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:14.096 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:14.096 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:14.722 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:14.722 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:14.980 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:14.980 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:15.238 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:15.238 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:15.496 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:15.755 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:15.755 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:16.013 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:16.013 [680/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:16.013 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:16.013 [682/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:16.271 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:16.271 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:16.530 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:16.530 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:16.530 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:16.788 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:16.788 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:16.788 [690/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:16.788 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:17.047 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:17.047 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:17.047 [694/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:17.613 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:17.613 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:17.613 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:17.872 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:17.872 [699/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:17.872 [700/743] Linking static target lib/librte_pipeline.a 00:03:17.872 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:18.439 [702/743] Linking target app/dpdk-dumpcap 00:03:18.439 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:18.439 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:18.439 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:18.698 [706/743] Linking target app/dpdk-pdump 00:03:18.698 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:18.698 [708/743] Linking target app/dpdk-proc-info 00:03:18.957 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:18.957 [710/743] Linking target app/dpdk-test-acl 00:03:18.957 [711/743] Linking target app/dpdk-test-bbdev 00:03:19.215 [712/743] Linking target app/dpdk-test-cmdline 00:03:19.215 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:19.215 [714/743] Linking target app/dpdk-test-compress-perf 00:03:19.215 [715/743] Linking target app/dpdk-test-crypto-perf 00:03:19.215 [716/743] Linking target app/dpdk-test-eventdev 00:03:19.215 [717/743] Linking target app/dpdk-test-flow-perf 00:03:19.215 [718/743] Linking target app/dpdk-test-fib 00:03:19.473 [719/743] Linking target app/dpdk-test-gpudev 00:03:19.473 [720/743] Linking target app/dpdk-test-pipeline 00:03:20.039 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:20.039 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:20.039 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:20.039 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:20.298 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:20.298 [726/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.298 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:20.298 [728/743] Linking target lib/librte_pipeline.so.23.0 00:03:20.298 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:20.864 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:20.864 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:21.123 [732/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:21.123 [733/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:21.123 [734/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:21.382 [735/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:21.382 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:21.382 [737/743] Linking target app/dpdk-test-sad 00:03:21.640 [738/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:21.640 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:21.640 [740/743] Linking target app/dpdk-test-regex 00:03:21.640 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:21.899 [742/743] Linking target app/dpdk-test-security-perf 00:03:22.158 [743/743] Linking target 
app/dpdk-testpmd 00:03:22.158 12:48:02 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:22.158 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:22.158 [0/1] Installing files. 00:03:22.417 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.680 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.681 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.682 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.682 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.683 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:22.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:22.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.683 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.683 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.942 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.942 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.942 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.942 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.942 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.943 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.943 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.943 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.943 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:22.943 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.943 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.944 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.207 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.208 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.208 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.208 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:23.208 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:23.208 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:23.208 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:23.208 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:23.208 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:23.208 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:23.208 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:23.208 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:23.208 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:23.208 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:23.208 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:23.208 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:23.208 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:23.208 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:23.208 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:23.208 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:23.208 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:23.208 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:23.208 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:23.208 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:23.208 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:23.208 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:23.208 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:23.208 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:23.208 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:23.208 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:23.208 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:23.208 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:23.208 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:23.208 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:23.208 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:23.208 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:23.208 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:23.208 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:23.208 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:23.208 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:23.208 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:23.208 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:23.208 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:23.208 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:23.208 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:23.208 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:23.208 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:23.208 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:23.208 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:23.208 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:23.208 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:23.208 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:23.208 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:23.208 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:23.208 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:23.208 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:23.208 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:23.208 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:23.208 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:23.208 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:23.208 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:23.208 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:23.208 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:23.208 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:23.208 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:23.208 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:23.208 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:23.208 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:23.208 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:23.208 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:23.208 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:23.208 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:23.208 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:23.208 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:23.208 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:23.208 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:23.208 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:23.208 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:23.208 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:23.208 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:23.208 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:23.208 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:23.208 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:23.208 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:23.208 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:23.208 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:23.208 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:23.208 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:23.208 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:23.208 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:23.208 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:23.208 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:23.208 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:23.208 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:23.208 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:23.208 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:23.208 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:23.208 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:23.208 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:23.208 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:23.208 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:23.208 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:23.208 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:23.208 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:23.208 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:23.208 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:23.208 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:23.208 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:23.208 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:23.208 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:23.208 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:23.208 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:23.208 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:23.208 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:23.209 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:23.209 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:23.209 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:23.209 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:23.209 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:23.209 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:23.209 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:23.209 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:23.209 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:23.209 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:23.209 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:23.209 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:23.209 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:23.209 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:23.209 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:23.209 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:23.209 12:48:03 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:23.209 12:48:03 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:23.209 12:48:03 -- common/autobuild_common.sh@203 -- $ cat 00:03:23.209 12:48:03 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:23.209 00:03:23.209 real 0m50.191s 00:03:23.209 user 5m54.018s 00:03:23.209 sys 0m58.472s 00:03:23.209 12:48:03 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:23.209 12:48:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.209 ************************************ 00:03:23.209 END TEST build_native_dpdk 00:03:23.209 ************************************ 00:03:23.209 12:48:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:23.209 12:48:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:23.209 12:48:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:23.209 12:48:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:23.209 12:48:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:23.209 12:48:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:23.209 12:48:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:23.209 
12:48:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:23.468 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:23.468 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.468 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:24.035 Using 'verbs' RDMA provider 00:03:39.487 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:51.692 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:51.692 go version go1.21.1 linux/amd64 00:03:51.692 Creating mk/config.mk...done. 00:03:51.692 Creating mk/cc.flags.mk...done. 00:03:51.692 Type 'make' to build. 00:03:51.692 12:48:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:51.692 12:48:31 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:51.692 12:48:31 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:51.692 12:48:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.692 ************************************ 00:03:51.692 START TEST make 00:03:51.692 ************************************ 00:03:51.692 12:48:31 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:51.692 make[1]: Nothing to be done for 'all'. 00:04:13.631 CC lib/log/log.o 00:04:13.631 CC lib/log/log_deprecated.o 00:04:13.631 CC lib/log/log_flags.o 00:04:13.631 CC lib/ut/ut.o 00:04:13.631 CC lib/ut_mock/mock.o 00:04:13.631 LIB libspdk_ut_mock.a 00:04:13.631 LIB libspdk_log.a 00:04:13.631 LIB libspdk_ut.a 00:04:13.631 SO libspdk_ut_mock.so.5.0 00:04:13.631 SO libspdk_log.so.6.1 00:04:13.631 SO libspdk_ut.so.1.0 00:04:13.631 SYMLINK libspdk_ut_mock.so 00:04:13.631 SYMLINK libspdk_ut.so 00:04:13.631 SYMLINK libspdk_log.so 00:04:13.889 CC lib/dma/dma.o 00:04:13.889 CC lib/util/base64.o 00:04:13.889 CC lib/ioat/ioat.o 00:04:13.889 CC lib/util/bit_array.o 00:04:13.889 CXX lib/trace_parser/trace.o 00:04:13.889 CC lib/util/cpuset.o 00:04:13.889 CC lib/util/crc16.o 00:04:13.889 CC lib/util/crc32.o 00:04:13.889 CC lib/util/crc32c.o 00:04:13.889 CC lib/vfio_user/host/vfio_user_pci.o 00:04:14.148 CC lib/util/crc32_ieee.o 00:04:14.148 CC lib/util/crc64.o 00:04:14.148 CC lib/util/dif.o 00:04:14.148 CC lib/util/fd.o 00:04:14.148 LIB libspdk_dma.a 00:04:14.148 CC lib/util/file.o 00:04:14.148 SO libspdk_dma.so.3.0 00:04:14.148 CC lib/util/hexlify.o 00:04:14.148 CC lib/util/iov.o 00:04:14.148 SYMLINK libspdk_dma.so 00:04:14.148 CC lib/util/math.o 00:04:14.148 CC lib/util/pipe.o 00:04:14.148 LIB libspdk_ioat.a 00:04:14.148 CC lib/util/strerror_tls.o 00:04:14.148 SO libspdk_ioat.so.6.0 00:04:14.148 CC lib/vfio_user/host/vfio_user.o 00:04:14.407 CC lib/util/string.o 00:04:14.407 SYMLINK libspdk_ioat.so 00:04:14.407 CC lib/util/uuid.o 00:04:14.407 CC lib/util/fd_group.o 00:04:14.407 CC lib/util/xor.o 00:04:14.407 CC lib/util/zipf.o 00:04:14.407 LIB libspdk_vfio_user.a 00:04:14.407 SO libspdk_vfio_user.so.4.0 00:04:14.665 SYMLINK libspdk_vfio_user.so 00:04:14.665 LIB libspdk_util.a 00:04:14.665 SO libspdk_util.so.8.0 00:04:14.923 SYMLINK libspdk_util.so 00:04:14.923 LIB libspdk_trace_parser.a 00:04:14.923 SO libspdk_trace_parser.so.4.0 00:04:14.923 CC 
lib/vmd/vmd.o 00:04:14.923 CC lib/vmd/led.o 00:04:14.923 CC lib/conf/conf.o 00:04:14.923 CC lib/json/json_parse.o 00:04:14.923 CC lib/rdma/common.o 00:04:14.923 CC lib/json/json_util.o 00:04:14.923 CC lib/env_dpdk/env.o 00:04:14.923 CC lib/env_dpdk/memory.o 00:04:14.923 CC lib/idxd/idxd.o 00:04:14.923 SYMLINK libspdk_trace_parser.so 00:04:14.923 CC lib/rdma/rdma_verbs.o 00:04:15.182 CC lib/json/json_write.o 00:04:15.182 LIB libspdk_conf.a 00:04:15.182 CC lib/idxd/idxd_user.o 00:04:15.182 CC lib/idxd/idxd_kernel.o 00:04:15.182 CC lib/env_dpdk/pci.o 00:04:15.182 SO libspdk_conf.so.5.0 00:04:15.182 LIB libspdk_rdma.a 00:04:15.182 SYMLINK libspdk_conf.so 00:04:15.182 CC lib/env_dpdk/init.o 00:04:15.182 SO libspdk_rdma.so.5.0 00:04:15.182 CC lib/env_dpdk/threads.o 00:04:15.440 SYMLINK libspdk_rdma.so 00:04:15.440 CC lib/env_dpdk/pci_ioat.o 00:04:15.440 CC lib/env_dpdk/pci_virtio.o 00:04:15.440 LIB libspdk_json.a 00:04:15.440 CC lib/env_dpdk/pci_vmd.o 00:04:15.440 SO libspdk_json.so.5.1 00:04:15.440 CC lib/env_dpdk/pci_idxd.o 00:04:15.440 CC lib/env_dpdk/pci_event.o 00:04:15.440 SYMLINK libspdk_json.so 00:04:15.440 CC lib/env_dpdk/sigbus_handler.o 00:04:15.440 LIB libspdk_idxd.a 00:04:15.440 CC lib/env_dpdk/pci_dpdk.o 00:04:15.440 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:15.441 SO libspdk_idxd.so.11.0 00:04:15.699 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:15.699 LIB libspdk_vmd.a 00:04:15.699 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.699 SO libspdk_vmd.so.5.0 00:04:15.699 SYMLINK libspdk_idxd.so 00:04:15.699 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.699 CC lib/jsonrpc/jsonrpc_client.o 00:04:15.699 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.699 SYMLINK libspdk_vmd.so 00:04:15.957 LIB libspdk_jsonrpc.a 00:04:15.957 SO libspdk_jsonrpc.so.5.1 00:04:15.957 SYMLINK libspdk_jsonrpc.so 00:04:16.215 CC lib/rpc/rpc.o 00:04:16.215 LIB libspdk_env_dpdk.a 00:04:16.215 LIB libspdk_rpc.a 00:04:16.473 SO libspdk_env_dpdk.so.13.0 00:04:16.473 SO libspdk_rpc.so.5.0 00:04:16.473 SYMLINK libspdk_rpc.so 00:04:16.473 SYMLINK libspdk_env_dpdk.so 00:04:16.473 CC lib/trace/trace.o 00:04:16.473 CC lib/trace/trace_rpc.o 00:04:16.473 CC lib/trace/trace_flags.o 00:04:16.473 CC lib/sock/sock_rpc.o 00:04:16.473 CC lib/sock/sock.o 00:04:16.473 CC lib/notify/notify.o 00:04:16.473 CC lib/notify/notify_rpc.o 00:04:16.732 LIB libspdk_notify.a 00:04:16.732 SO libspdk_notify.so.5.0 00:04:16.732 LIB libspdk_trace.a 00:04:16.732 SO libspdk_trace.so.9.0 00:04:16.990 SYMLINK libspdk_notify.so 00:04:16.990 SYMLINK libspdk_trace.so 00:04:16.990 LIB libspdk_sock.a 00:04:16.990 SO libspdk_sock.so.8.0 00:04:16.990 SYMLINK libspdk_sock.so 00:04:16.990 CC lib/thread/thread.o 00:04:16.990 CC lib/thread/iobuf.o 00:04:17.249 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:17.249 CC lib/nvme/nvme_ctrlr.o 00:04:17.249 CC lib/nvme/nvme_ns_cmd.o 00:04:17.249 CC lib/nvme/nvme_fabric.o 00:04:17.249 CC lib/nvme/nvme_pcie.o 00:04:17.249 CC lib/nvme/nvme_pcie_common.o 00:04:17.249 CC lib/nvme/nvme_ns.o 00:04:17.249 CC lib/nvme/nvme_qpair.o 00:04:17.507 CC lib/nvme/nvme.o 00:04:18.073 CC lib/nvme/nvme_quirks.o 00:04:18.073 CC lib/nvme/nvme_transport.o 00:04:18.073 CC lib/nvme/nvme_discovery.o 00:04:18.073 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:18.073 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:18.331 CC lib/nvme/nvme_tcp.o 00:04:18.331 CC lib/nvme/nvme_opal.o 00:04:18.331 CC lib/nvme/nvme_io_msg.o 00:04:18.589 CC lib/nvme/nvme_poll_group.o 00:04:18.589 LIB libspdk_thread.a 00:04:18.589 SO libspdk_thread.so.9.0 00:04:18.589 SYMLINK libspdk_thread.so 00:04:18.589 CC 
lib/nvme/nvme_zns.o 00:04:18.589 CC lib/nvme/nvme_cuse.o 00:04:18.847 CC lib/nvme/nvme_vfio_user.o 00:04:18.847 CC lib/nvme/nvme_rdma.o 00:04:18.847 CC lib/accel/accel.o 00:04:18.847 CC lib/blob/blobstore.o 00:04:19.105 CC lib/blob/request.o 00:04:19.105 CC lib/blob/zeroes.o 00:04:19.363 CC lib/blob/blob_bs_dev.o 00:04:19.363 CC lib/accel/accel_rpc.o 00:04:19.363 CC lib/accel/accel_sw.o 00:04:19.363 CC lib/init/json_config.o 00:04:19.621 CC lib/init/subsystem.o 00:04:19.621 CC lib/virtio/virtio.o 00:04:19.621 CC lib/virtio/virtio_vhost_user.o 00:04:19.621 CC lib/virtio/virtio_vfio_user.o 00:04:19.621 CC lib/init/subsystem_rpc.o 00:04:19.621 CC lib/init/rpc.o 00:04:19.621 CC lib/virtio/virtio_pci.o 00:04:19.880 LIB libspdk_init.a 00:04:19.880 LIB libspdk_accel.a 00:04:19.880 SO libspdk_init.so.4.0 00:04:19.880 SO libspdk_accel.so.14.0 00:04:19.880 SYMLINK libspdk_init.so 00:04:19.880 SYMLINK libspdk_accel.so 00:04:19.880 LIB libspdk_virtio.a 00:04:20.140 SO libspdk_virtio.so.6.0 00:04:20.140 LIB libspdk_nvme.a 00:04:20.140 CC lib/bdev/bdev.o 00:04:20.140 CC lib/event/app.o 00:04:20.140 CC lib/event/reactor.o 00:04:20.140 CC lib/event/log_rpc.o 00:04:20.140 CC lib/bdev/bdev_zone.o 00:04:20.140 CC lib/bdev/bdev_rpc.o 00:04:20.140 CC lib/bdev/part.o 00:04:20.140 SYMLINK libspdk_virtio.so 00:04:20.140 CC lib/bdev/scsi_nvme.o 00:04:20.456 SO libspdk_nvme.so.12.0 00:04:20.456 CC lib/event/app_rpc.o 00:04:20.456 CC lib/event/scheduler_static.o 00:04:20.456 SYMLINK libspdk_nvme.so 00:04:20.456 LIB libspdk_event.a 00:04:20.721 SO libspdk_event.so.12.0 00:04:20.721 SYMLINK libspdk_event.so 00:04:21.657 LIB libspdk_blob.a 00:04:21.657 SO libspdk_blob.so.10.1 00:04:21.657 SYMLINK libspdk_blob.so 00:04:21.915 CC lib/lvol/lvol.o 00:04:21.915 CC lib/blobfs/blobfs.o 00:04:21.915 CC lib/blobfs/tree.o 00:04:22.482 LIB libspdk_bdev.a 00:04:22.482 SO libspdk_bdev.so.14.0 00:04:22.482 SYMLINK libspdk_bdev.so 00:04:22.741 LIB libspdk_blobfs.a 00:04:22.741 LIB libspdk_lvol.a 00:04:22.741 SO libspdk_blobfs.so.9.0 00:04:22.741 SO libspdk_lvol.so.9.1 00:04:22.741 CC lib/nbd/nbd.o 00:04:22.741 CC lib/nbd/nbd_rpc.o 00:04:22.741 CC lib/nvmf/ctrlr.o 00:04:22.741 CC lib/nvmf/ctrlr_discovery.o 00:04:22.741 CC lib/nvmf/ctrlr_bdev.o 00:04:22.741 CC lib/scsi/dev.o 00:04:22.741 CC lib/ftl/ftl_core.o 00:04:22.741 CC lib/ublk/ublk.o 00:04:22.741 SYMLINK libspdk_blobfs.so 00:04:22.741 CC lib/ublk/ublk_rpc.o 00:04:22.741 SYMLINK libspdk_lvol.so 00:04:22.741 CC lib/scsi/lun.o 00:04:22.999 CC lib/ftl/ftl_init.o 00:04:22.999 CC lib/ftl/ftl_layout.o 00:04:22.999 CC lib/ftl/ftl_debug.o 00:04:22.999 CC lib/ftl/ftl_io.o 00:04:22.999 CC lib/scsi/port.o 00:04:22.999 LIB libspdk_nbd.a 00:04:23.258 CC lib/ftl/ftl_sb.o 00:04:23.258 SO libspdk_nbd.so.6.0 00:04:23.258 CC lib/ftl/ftl_l2p.o 00:04:23.258 CC lib/ftl/ftl_l2p_flat.o 00:04:23.258 SYMLINK libspdk_nbd.so 00:04:23.258 CC lib/ftl/ftl_nv_cache.o 00:04:23.258 CC lib/scsi/scsi.o 00:04:23.258 CC lib/scsi/scsi_bdev.o 00:04:23.258 CC lib/ftl/ftl_band.o 00:04:23.258 CC lib/ftl/ftl_band_ops.o 00:04:23.258 CC lib/ftl/ftl_writer.o 00:04:23.258 LIB libspdk_ublk.a 00:04:23.516 CC lib/scsi/scsi_pr.o 00:04:23.516 SO libspdk_ublk.so.2.0 00:04:23.516 CC lib/ftl/ftl_rq.o 00:04:23.516 CC lib/nvmf/subsystem.o 00:04:23.516 SYMLINK libspdk_ublk.so 00:04:23.516 CC lib/nvmf/nvmf.o 00:04:23.516 CC lib/nvmf/nvmf_rpc.o 00:04:23.516 CC lib/ftl/ftl_reloc.o 00:04:23.774 CC lib/scsi/scsi_rpc.o 00:04:23.774 CC lib/ftl/ftl_l2p_cache.o 00:04:23.774 CC lib/scsi/task.o 00:04:23.774 CC lib/ftl/ftl_p2l.o 00:04:23.774 CC 
lib/ftl/mngt/ftl_mngt.o 00:04:24.033 LIB libspdk_scsi.a 00:04:24.033 CC lib/nvmf/transport.o 00:04:24.033 SO libspdk_scsi.so.8.0 00:04:24.033 SYMLINK libspdk_scsi.so 00:04:24.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:24.033 CC lib/nvmf/tcp.o 00:04:24.033 CC lib/nvmf/rdma.o 00:04:24.291 CC lib/iscsi/conn.o 00:04:24.291 CC lib/iscsi/init_grp.o 00:04:24.291 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:24.291 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:24.291 CC lib/iscsi/iscsi.o 00:04:24.291 CC lib/vhost/vhost.o 00:04:24.548 CC lib/vhost/vhost_rpc.o 00:04:24.548 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:24.548 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:24.548 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:24.548 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:24.806 CC lib/vhost/vhost_scsi.o 00:04:24.806 CC lib/vhost/vhost_blk.o 00:04:24.806 CC lib/vhost/rte_vhost_user.o 00:04:24.806 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:24.806 CC lib/iscsi/md5.o 00:04:25.064 CC lib/iscsi/param.o 00:04:25.064 CC lib/iscsi/portal_grp.o 00:04:25.064 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:25.064 CC lib/iscsi/tgt_node.o 00:04:25.322 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:25.322 CC lib/iscsi/iscsi_subsystem.o 00:04:25.322 CC lib/iscsi/iscsi_rpc.o 00:04:25.580 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:25.580 CC lib/iscsi/task.o 00:04:25.580 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:25.839 CC lib/ftl/utils/ftl_conf.o 00:04:25.839 CC lib/ftl/utils/ftl_md.o 00:04:25.839 CC lib/ftl/utils/ftl_mempool.o 00:04:25.839 CC lib/ftl/utils/ftl_bitmap.o 00:04:25.839 CC lib/ftl/utils/ftl_property.o 00:04:25.839 LIB libspdk_iscsi.a 00:04:25.839 LIB libspdk_vhost.a 00:04:25.839 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:25.839 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:25.839 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:25.839 SO libspdk_iscsi.so.7.0 00:04:25.839 SO libspdk_vhost.so.7.1 00:04:25.839 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.097 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.097 SYMLINK libspdk_vhost.so 00:04:26.097 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.097 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:26.097 SYMLINK libspdk_iscsi.so 00:04:26.097 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:26.097 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:26.097 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:26.097 CC lib/ftl/base/ftl_base_dev.o 00:04:26.097 CC lib/ftl/base/ftl_base_bdev.o 00:04:26.097 CC lib/ftl/ftl_trace.o 00:04:26.097 LIB libspdk_nvmf.a 00:04:26.356 SO libspdk_nvmf.so.17.0 00:04:26.356 LIB libspdk_ftl.a 00:04:26.356 SYMLINK libspdk_nvmf.so 00:04:26.614 SO libspdk_ftl.so.8.0 00:04:26.873 SYMLINK libspdk_ftl.so 00:04:27.131 CC module/env_dpdk/env_dpdk_rpc.o 00:04:27.131 CC module/sock/posix/posix.o 00:04:27.131 CC module/accel/iaa/accel_iaa.o 00:04:27.131 CC module/accel/error/accel_error.o 00:04:27.131 CC module/blob/bdev/blob_bdev.o 00:04:27.131 CC module/scheduler/gscheduler/gscheduler.o 00:04:27.131 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:27.131 CC module/accel/ioat/accel_ioat.o 00:04:27.131 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:27.131 CC module/accel/dsa/accel_dsa.o 00:04:27.131 LIB libspdk_env_dpdk_rpc.a 00:04:27.389 SO libspdk_env_dpdk_rpc.so.5.0 00:04:27.389 LIB libspdk_scheduler_gscheduler.a 00:04:27.389 LIB libspdk_scheduler_dpdk_governor.a 00:04:27.389 SYMLINK libspdk_env_dpdk_rpc.so 00:04:27.389 CC module/accel/ioat/accel_ioat_rpc.o 00:04:27.389 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:27.389 SO libspdk_scheduler_gscheduler.so.3.0 00:04:27.389 CC module/accel/error/accel_error_rpc.o 00:04:27.389 CC 
module/accel/iaa/accel_iaa_rpc.o 00:04:27.389 LIB libspdk_scheduler_dynamic.a 00:04:27.389 CC module/accel/dsa/accel_dsa_rpc.o 00:04:27.389 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:27.389 SYMLINK libspdk_scheduler_gscheduler.so 00:04:27.389 SO libspdk_scheduler_dynamic.so.3.0 00:04:27.389 LIB libspdk_blob_bdev.a 00:04:27.389 SYMLINK libspdk_scheduler_dynamic.so 00:04:27.389 SO libspdk_blob_bdev.so.10.1 00:04:27.389 LIB libspdk_accel_ioat.a 00:04:27.389 LIB libspdk_accel_error.a 00:04:27.389 SO libspdk_accel_ioat.so.5.0 00:04:27.389 LIB libspdk_accel_iaa.a 00:04:27.389 SYMLINK libspdk_blob_bdev.so 00:04:27.648 SO libspdk_accel_error.so.1.0 00:04:27.648 LIB libspdk_accel_dsa.a 00:04:27.648 SO libspdk_accel_iaa.so.2.0 00:04:27.648 SYMLINK libspdk_accel_ioat.so 00:04:27.648 SO libspdk_accel_dsa.so.4.0 00:04:27.648 SYMLINK libspdk_accel_error.so 00:04:27.648 SYMLINK libspdk_accel_iaa.so 00:04:27.648 SYMLINK libspdk_accel_dsa.so 00:04:27.648 CC module/bdev/gpt/gpt.o 00:04:27.648 CC module/bdev/malloc/bdev_malloc.o 00:04:27.648 CC module/bdev/error/vbdev_error.o 00:04:27.648 CC module/bdev/delay/vbdev_delay.o 00:04:27.648 CC module/bdev/lvol/vbdev_lvol.o 00:04:27.648 CC module/blobfs/bdev/blobfs_bdev.o 00:04:27.648 CC module/bdev/null/bdev_null.o 00:04:27.648 CC module/bdev/nvme/bdev_nvme.o 00:04:27.648 CC module/bdev/passthru/vbdev_passthru.o 00:04:27.906 LIB libspdk_sock_posix.a 00:04:27.906 CC module/bdev/gpt/vbdev_gpt.o 00:04:27.906 SO libspdk_sock_posix.so.5.0 00:04:27.906 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:27.906 CC module/bdev/error/vbdev_error_rpc.o 00:04:27.906 SYMLINK libspdk_sock_posix.so 00:04:27.906 CC module/bdev/null/bdev_null_rpc.o 00:04:27.906 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:28.164 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:28.164 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:28.164 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:28.164 LIB libspdk_blobfs_bdev.a 00:04:28.164 SO libspdk_blobfs_bdev.so.5.0 00:04:28.164 LIB libspdk_bdev_error.a 00:04:28.164 LIB libspdk_bdev_gpt.a 00:04:28.164 SO libspdk_bdev_error.so.5.0 00:04:28.164 LIB libspdk_bdev_passthru.a 00:04:28.164 LIB libspdk_bdev_null.a 00:04:28.164 SYMLINK libspdk_blobfs_bdev.so 00:04:28.164 SO libspdk_bdev_gpt.so.5.0 00:04:28.164 SO libspdk_bdev_passthru.so.5.0 00:04:28.164 SO libspdk_bdev_null.so.5.0 00:04:28.164 SYMLINK libspdk_bdev_error.so 00:04:28.164 LIB libspdk_bdev_malloc.a 00:04:28.164 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:28.164 LIB libspdk_bdev_delay.a 00:04:28.164 SYMLINK libspdk_bdev_gpt.so 00:04:28.164 SYMLINK libspdk_bdev_passthru.so 00:04:28.164 CC module/bdev/nvme/nvme_rpc.o 00:04:28.164 SYMLINK libspdk_bdev_null.so 00:04:28.164 SO libspdk_bdev_malloc.so.5.0 00:04:28.164 CC module/bdev/nvme/bdev_mdns_client.o 00:04:28.164 SO libspdk_bdev_delay.so.5.0 00:04:28.164 CC module/bdev/raid/bdev_raid.o 00:04:28.422 SYMLINK libspdk_bdev_malloc.so 00:04:28.422 CC module/bdev/split/vbdev_split.o 00:04:28.422 LIB libspdk_bdev_lvol.a 00:04:28.422 SYMLINK libspdk_bdev_delay.so 00:04:28.422 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:28.422 SO libspdk_bdev_lvol.so.5.0 00:04:28.422 CC module/bdev/aio/bdev_aio.o 00:04:28.422 CC module/bdev/ftl/bdev_ftl.o 00:04:28.422 SYMLINK libspdk_bdev_lvol.so 00:04:28.422 CC module/bdev/aio/bdev_aio_rpc.o 00:04:28.422 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:28.422 CC module/bdev/nvme/vbdev_opal.o 00:04:28.681 CC module/bdev/split/vbdev_split_rpc.o 00:04:28.681 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.681 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.681 LIB libspdk_bdev_zone_block.a 00:04:28.681 SO libspdk_bdev_zone_block.so.5.0 00:04:28.681 LIB libspdk_bdev_split.a 00:04:28.681 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.681 LIB libspdk_bdev_aio.a 00:04:28.681 SO libspdk_bdev_split.so.5.0 00:04:28.681 SO libspdk_bdev_aio.so.5.0 00:04:28.681 SYMLINK libspdk_bdev_zone_block.so 00:04:28.940 CC module/bdev/raid/bdev_raid_rpc.o 00:04:28.940 SYMLINK libspdk_bdev_split.so 00:04:28.940 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.940 CC module/bdev/raid/raid0.o 00:04:28.940 CC module/bdev/raid/raid1.o 00:04:28.940 SYMLINK libspdk_bdev_aio.so 00:04:28.940 CC module/bdev/raid/concat.o 00:04:28.940 CC module/bdev/iscsi/bdev_iscsi.o 00:04:28.940 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:28.940 LIB libspdk_bdev_ftl.a 00:04:28.940 SO libspdk_bdev_ftl.so.5.0 00:04:28.940 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:28.940 SYMLINK libspdk_bdev_ftl.so 00:04:28.940 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:29.199 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:29.199 LIB libspdk_bdev_raid.a 00:04:29.199 SO libspdk_bdev_raid.so.5.0 00:04:29.199 LIB libspdk_bdev_iscsi.a 00:04:29.199 SO libspdk_bdev_iscsi.so.5.0 00:04:29.199 SYMLINK libspdk_bdev_raid.so 00:04:29.457 SYMLINK libspdk_bdev_iscsi.so 00:04:29.457 LIB libspdk_bdev_virtio.a 00:04:29.457 SO libspdk_bdev_virtio.so.5.0 00:04:29.731 SYMLINK libspdk_bdev_virtio.so 00:04:29.991 LIB libspdk_bdev_nvme.a 00:04:29.991 SO libspdk_bdev_nvme.so.6.0 00:04:29.991 SYMLINK libspdk_bdev_nvme.so 00:04:30.254 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.254 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.254 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.254 CC module/event/subsystems/scheduler/scheduler.o 00:04:30.254 CC module/event/subsystems/sock/sock.o 00:04:30.254 CC module/event/subsystems/vmd/vmd.o 00:04:30.254 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:30.514 LIB libspdk_event_sock.a 00:04:30.514 LIB libspdk_event_iobuf.a 00:04:30.514 SO libspdk_event_sock.so.4.0 00:04:30.514 LIB libspdk_event_scheduler.a 00:04:30.514 SO libspdk_event_iobuf.so.2.0 00:04:30.514 LIB libspdk_event_vhost_blk.a 00:04:30.514 SO libspdk_event_scheduler.so.3.0 00:04:30.514 LIB libspdk_event_vmd.a 00:04:30.514 SYMLINK libspdk_event_sock.so 00:04:30.514 SO libspdk_event_vhost_blk.so.2.0 00:04:30.514 SYMLINK libspdk_event_iobuf.so 00:04:30.514 SO libspdk_event_vmd.so.5.0 00:04:30.514 SYMLINK libspdk_event_scheduler.so 00:04:30.514 SYMLINK libspdk_event_vhost_blk.so 00:04:30.772 SYMLINK libspdk_event_vmd.so 00:04:30.773 CC module/event/subsystems/accel/accel.o 00:04:30.773 LIB libspdk_event_accel.a 00:04:30.773 SO libspdk_event_accel.so.5.0 00:04:31.031 SYMLINK libspdk_event_accel.so 00:04:31.031 CC module/event/subsystems/bdev/bdev.o 00:04:31.289 LIB libspdk_event_bdev.a 00:04:31.289 SO libspdk_event_bdev.so.5.0 00:04:31.547 SYMLINK libspdk_event_bdev.so 00:04:31.547 CC module/event/subsystems/nbd/nbd.o 00:04:31.547 CC module/event/subsystems/scsi/scsi.o 00:04:31.547 CC module/event/subsystems/ublk/ublk.o 00:04:31.547 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:31.547 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:31.805 LIB libspdk_event_nbd.a 00:04:31.805 LIB libspdk_event_ublk.a 00:04:31.805 LIB libspdk_event_scsi.a 00:04:31.805 SO libspdk_event_nbd.so.5.0 00:04:31.805 SO libspdk_event_ublk.so.2.0 00:04:31.805 SO libspdk_event_scsi.so.5.0 00:04:31.805 SYMLINK libspdk_event_nbd.so 00:04:31.805 SYMLINK libspdk_event_ublk.so 00:04:31.805 SYMLINK 
libspdk_event_scsi.so 00:04:31.805 LIB libspdk_event_nvmf.a 00:04:31.805 SO libspdk_event_nvmf.so.5.0 00:04:32.064 SYMLINK libspdk_event_nvmf.so 00:04:32.064 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.064 CC module/event/subsystems/iscsi/iscsi.o 00:04:32.064 LIB libspdk_event_vhost_scsi.a 00:04:32.322 LIB libspdk_event_iscsi.a 00:04:32.322 SO libspdk_event_vhost_scsi.so.2.0 00:04:32.322 SO libspdk_event_iscsi.so.5.0 00:04:32.322 SYMLINK libspdk_event_vhost_scsi.so 00:04:32.322 SYMLINK libspdk_event_iscsi.so 00:04:32.322 SO libspdk.so.5.0 00:04:32.322 SYMLINK libspdk.so 00:04:32.580 CXX app/trace/trace.o 00:04:32.580 CC app/trace_record/trace_record.o 00:04:32.580 CC examples/sock/hello_world/hello_sock.o 00:04:32.580 CC examples/ioat/perf/perf.o 00:04:32.580 CC examples/nvme/hello_world/hello_world.o 00:04:32.580 CC examples/accel/perf/accel_perf.o 00:04:32.580 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.581 CC examples/blob/hello_world/hello_blob.o 00:04:32.581 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.581 CC test/accel/dif/dif.o 00:04:32.838 LINK lsvmd 00:04:32.838 LINK spdk_trace_record 00:04:32.838 LINK ioat_perf 00:04:32.838 LINK hello_sock 00:04:32.838 LINK hello_world 00:04:32.838 LINK hello_blob 00:04:33.095 LINK hello_bdev 00:04:33.095 LINK spdk_trace 00:04:33.095 CC examples/ioat/verify/verify.o 00:04:33.095 CC examples/vmd/led/led.o 00:04:33.095 LINK dif 00:04:33.095 LINK accel_perf 00:04:33.095 CC examples/nvme/reconnect/reconnect.o 00:04:33.095 CC app/nvmf_tgt/nvmf_main.o 00:04:33.095 CC examples/nvmf/nvmf/nvmf.o 00:04:33.095 LINK led 00:04:33.354 CC examples/blob/cli/blobcli.o 00:04:33.354 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.354 LINK verify 00:04:33.354 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.354 LINK nvmf_tgt 00:04:33.354 CC app/spdk_lspci/spdk_lspci.o 00:04:33.354 CC app/spdk_tgt/spdk_tgt.o 00:04:33.354 LINK reconnect 00:04:33.354 CC test/app/bdev_svc/bdev_svc.o 00:04:33.612 LINK nvmf 00:04:33.612 CC app/spdk_nvme_perf/perf.o 00:04:33.612 LINK iscsi_tgt 00:04:33.612 LINK spdk_lspci 00:04:33.612 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:33.612 LINK spdk_tgt 00:04:33.612 LINK bdev_svc 00:04:33.612 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:33.612 LINK blobcli 00:04:33.870 CC test/app/histogram_perf/histogram_perf.o 00:04:33.870 CC app/spdk_nvme_identify/identify.o 00:04:33.870 CC test/bdev/bdevio/bdevio.o 00:04:33.870 CC examples/nvme/arbitration/arbitration.o 00:04:33.870 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:33.870 LINK histogram_perf 00:04:34.128 LINK bdevperf 00:04:34.128 LINK nvme_fuzz 00:04:34.128 CC test/blobfs/mkfs/mkfs.o 00:04:34.128 CC app/spdk_nvme_discover/discovery_aer.o 00:04:34.128 LINK nvme_manage 00:04:34.128 LINK arbitration 00:04:34.128 CC examples/nvme/hotplug/hotplug.o 00:04:34.386 LINK bdevio 00:04:34.386 LINK mkfs 00:04:34.386 LINK spdk_nvme_perf 00:04:34.386 CC examples/util/zipf/zipf.o 00:04:34.386 LINK spdk_nvme_discover 00:04:34.386 CC examples/thread/thread/thread_ex.o 00:04:34.386 LINK hotplug 00:04:34.386 CC examples/idxd/perf/perf.o 00:04:34.386 LINK zipf 00:04:34.644 CC test/app/jsoncat/jsoncat.o 00:04:34.644 LINK spdk_nvme_identify 00:04:34.644 CC test/app/stub/stub.o 00:04:34.644 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:34.644 LINK jsoncat 00:04:34.644 TEST_HEADER include/spdk/accel.h 00:04:34.644 TEST_HEADER include/spdk/accel_module.h 00:04:34.644 TEST_HEADER include/spdk/assert.h 00:04:34.644 TEST_HEADER include/spdk/barrier.h 00:04:34.644 CC examples/nvme/cmb_copy/cmb_copy.o 
00:04:34.644 TEST_HEADER include/spdk/base64.h 00:04:34.644 LINK stub 00:04:34.644 TEST_HEADER include/spdk/bdev.h 00:04:34.644 TEST_HEADER include/spdk/bdev_module.h 00:04:34.644 TEST_HEADER include/spdk/bdev_zone.h 00:04:34.644 TEST_HEADER include/spdk/bit_array.h 00:04:34.644 TEST_HEADER include/spdk/bit_pool.h 00:04:34.644 TEST_HEADER include/spdk/blob_bdev.h 00:04:34.644 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:34.644 TEST_HEADER include/spdk/blobfs.h 00:04:34.644 LINK interrupt_tgt 00:04:34.644 TEST_HEADER include/spdk/blob.h 00:04:34.644 TEST_HEADER include/spdk/conf.h 00:04:34.902 TEST_HEADER include/spdk/config.h 00:04:34.902 LINK thread 00:04:34.902 TEST_HEADER include/spdk/cpuset.h 00:04:34.902 TEST_HEADER include/spdk/crc16.h 00:04:34.902 TEST_HEADER include/spdk/crc32.h 00:04:34.902 TEST_HEADER include/spdk/crc64.h 00:04:34.902 CC app/spdk_top/spdk_top.o 00:04:34.902 TEST_HEADER include/spdk/dif.h 00:04:34.902 TEST_HEADER include/spdk/dma.h 00:04:34.902 TEST_HEADER include/spdk/endian.h 00:04:34.902 TEST_HEADER include/spdk/env_dpdk.h 00:04:34.902 TEST_HEADER include/spdk/env.h 00:04:34.902 TEST_HEADER include/spdk/event.h 00:04:34.902 TEST_HEADER include/spdk/fd_group.h 00:04:34.902 TEST_HEADER include/spdk/fd.h 00:04:34.902 TEST_HEADER include/spdk/file.h 00:04:34.902 TEST_HEADER include/spdk/ftl.h 00:04:34.903 TEST_HEADER include/spdk/gpt_spec.h 00:04:34.903 TEST_HEADER include/spdk/hexlify.h 00:04:34.903 TEST_HEADER include/spdk/histogram_data.h 00:04:34.903 TEST_HEADER include/spdk/idxd.h 00:04:34.903 TEST_HEADER include/spdk/idxd_spec.h 00:04:34.903 TEST_HEADER include/spdk/init.h 00:04:34.903 TEST_HEADER include/spdk/ioat.h 00:04:34.903 TEST_HEADER include/spdk/ioat_spec.h 00:04:34.903 TEST_HEADER include/spdk/iscsi_spec.h 00:04:34.903 TEST_HEADER include/spdk/json.h 00:04:34.903 TEST_HEADER include/spdk/jsonrpc.h 00:04:34.903 TEST_HEADER include/spdk/likely.h 00:04:34.903 TEST_HEADER include/spdk/log.h 00:04:34.903 TEST_HEADER include/spdk/lvol.h 00:04:34.903 TEST_HEADER include/spdk/memory.h 00:04:34.903 TEST_HEADER include/spdk/mmio.h 00:04:34.903 TEST_HEADER include/spdk/nbd.h 00:04:34.903 TEST_HEADER include/spdk/notify.h 00:04:34.903 TEST_HEADER include/spdk/nvme.h 00:04:34.903 TEST_HEADER include/spdk/nvme_intel.h 00:04:34.903 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:34.903 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:34.903 TEST_HEADER include/spdk/nvme_spec.h 00:04:34.903 TEST_HEADER include/spdk/nvme_zns.h 00:04:34.903 LINK idxd_perf 00:04:34.903 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:34.903 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:34.903 TEST_HEADER include/spdk/nvmf.h 00:04:34.903 TEST_HEADER include/spdk/nvmf_spec.h 00:04:34.903 TEST_HEADER include/spdk/nvmf_transport.h 00:04:34.903 TEST_HEADER include/spdk/opal.h 00:04:34.903 TEST_HEADER include/spdk/opal_spec.h 00:04:34.903 TEST_HEADER include/spdk/pci_ids.h 00:04:34.903 TEST_HEADER include/spdk/pipe.h 00:04:34.903 TEST_HEADER include/spdk/queue.h 00:04:34.903 TEST_HEADER include/spdk/reduce.h 00:04:34.903 TEST_HEADER include/spdk/rpc.h 00:04:34.903 TEST_HEADER include/spdk/scheduler.h 00:04:34.903 TEST_HEADER include/spdk/scsi.h 00:04:34.903 TEST_HEADER include/spdk/scsi_spec.h 00:04:34.903 TEST_HEADER include/spdk/sock.h 00:04:34.903 TEST_HEADER include/spdk/stdinc.h 00:04:34.903 TEST_HEADER include/spdk/string.h 00:04:34.903 TEST_HEADER include/spdk/thread.h 00:04:34.903 TEST_HEADER include/spdk/trace.h 00:04:34.903 TEST_HEADER include/spdk/trace_parser.h 00:04:34.903 TEST_HEADER 
include/spdk/tree.h 00:04:34.903 TEST_HEADER include/spdk/ublk.h 00:04:34.903 TEST_HEADER include/spdk/util.h 00:04:34.903 TEST_HEADER include/spdk/uuid.h 00:04:34.903 TEST_HEADER include/spdk/version.h 00:04:34.903 LINK cmb_copy 00:04:34.903 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:34.903 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:34.903 TEST_HEADER include/spdk/vhost.h 00:04:34.903 TEST_HEADER include/spdk/vmd.h 00:04:34.903 TEST_HEADER include/spdk/xor.h 00:04:34.903 TEST_HEADER include/spdk/zipf.h 00:04:34.903 CXX test/cpp_headers/accel.o 00:04:34.903 CXX test/cpp_headers/accel_module.o 00:04:34.903 CC test/dma/test_dma/test_dma.o 00:04:34.903 CC test/env/vtophys/vtophys.o 00:04:35.161 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:35.161 CC test/env/mem_callbacks/mem_callbacks.o 00:04:35.161 CXX test/cpp_headers/assert.o 00:04:35.161 CC examples/nvme/abort/abort.o 00:04:35.161 CC test/env/memory/memory_ut.o 00:04:35.161 LINK vtophys 00:04:35.161 LINK env_dpdk_post_init 00:04:35.161 LINK mem_callbacks 00:04:35.161 CXX test/cpp_headers/barrier.o 00:04:35.419 LINK test_dma 00:04:35.419 CC test/event/event_perf/event_perf.o 00:04:35.419 CXX test/cpp_headers/base64.o 00:04:35.419 CC test/nvme/aer/aer.o 00:04:35.419 LINK abort 00:04:35.677 LINK iscsi_fuzz 00:04:35.677 CC test/lvol/esnap/esnap.o 00:04:35.677 LINK event_perf 00:04:35.677 CXX test/cpp_headers/bdev.o 00:04:35.677 LINK memory_ut 00:04:35.677 CC test/env/pci/pci_ut.o 00:04:35.677 LINK spdk_top 00:04:35.936 LINK aer 00:04:35.936 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.936 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:35.936 CC test/event/reactor/reactor.o 00:04:35.936 CXX test/cpp_headers/bdev_module.o 00:04:35.936 CC test/event/reactor_perf/reactor_perf.o 00:04:35.936 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:35.936 LINK pmr_persistence 00:04:35.936 CC app/vhost/vhost.o 00:04:36.194 LINK reactor 00:04:36.194 CC test/nvme/reset/reset.o 00:04:36.194 CXX test/cpp_headers/bdev_zone.o 00:04:36.194 LINK reactor_perf 00:04:36.194 LINK pci_ut 00:04:36.194 CXX test/cpp_headers/bit_array.o 00:04:36.194 LINK vhost 00:04:36.194 CC test/event/app_repeat/app_repeat.o 00:04:36.452 LINK reset 00:04:36.452 CXX test/cpp_headers/bit_pool.o 00:04:36.452 CC test/event/scheduler/scheduler.o 00:04:36.452 CC test/rpc_client/rpc_client_test.o 00:04:36.452 LINK app_repeat 00:04:36.452 LINK vhost_fuzz 00:04:36.452 CC test/thread/poller_perf/poller_perf.o 00:04:36.452 CC app/spdk_dd/spdk_dd.o 00:04:36.452 CXX test/cpp_headers/blob_bdev.o 00:04:36.710 CC test/nvme/sgl/sgl.o 00:04:36.710 LINK rpc_client_test 00:04:36.710 LINK scheduler 00:04:36.710 CC test/nvme/e2edp/nvme_dp.o 00:04:36.710 CC test/nvme/overhead/overhead.o 00:04:36.710 LINK poller_perf 00:04:36.710 CXX test/cpp_headers/blobfs_bdev.o 00:04:36.969 CC test/nvme/err_injection/err_injection.o 00:04:36.969 LINK sgl 00:04:36.969 LINK spdk_dd 00:04:36.969 CC test/nvme/startup/startup.o 00:04:36.969 CXX test/cpp_headers/blobfs.o 00:04:36.969 CC test/nvme/reserve/reserve.o 00:04:36.969 LINK overhead 00:04:36.969 LINK nvme_dp 00:04:36.969 LINK err_injection 00:04:37.227 CC test/nvme/simple_copy/simple_copy.o 00:04:37.227 CXX test/cpp_headers/blob.o 00:04:37.227 LINK startup 00:04:37.227 LINK reserve 00:04:37.227 CXX test/cpp_headers/conf.o 00:04:37.227 CC test/nvme/connect_stress/connect_stress.o 00:04:37.227 CC app/fio/nvme/fio_plugin.o 00:04:37.486 CC test/nvme/boot_partition/boot_partition.o 00:04:37.486 LINK simple_copy 00:04:37.486 CC 
test/nvme/compliance/nvme_compliance.o 00:04:37.486 CC test/nvme/fused_ordering/fused_ordering.o 00:04:37.486 CXX test/cpp_headers/config.o 00:04:37.486 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:37.486 CXX test/cpp_headers/cpuset.o 00:04:37.486 LINK boot_partition 00:04:37.745 LINK connect_stress 00:04:37.745 CC test/nvme/fdp/fdp.o 00:04:37.745 CXX test/cpp_headers/crc16.o 00:04:37.745 CC test/nvme/cuse/cuse.o 00:04:37.745 LINK doorbell_aers 00:04:37.745 LINK fused_ordering 00:04:37.745 CXX test/cpp_headers/crc32.o 00:04:37.745 LINK nvme_compliance 00:04:38.003 CXX test/cpp_headers/crc64.o 00:04:38.003 CXX test/cpp_headers/dif.o 00:04:38.003 LINK spdk_nvme 00:04:38.003 CXX test/cpp_headers/dma.o 00:04:38.003 LINK fdp 00:04:38.003 CXX test/cpp_headers/endian.o 00:04:38.003 CC app/fio/bdev/fio_plugin.o 00:04:38.003 CXX test/cpp_headers/env_dpdk.o 00:04:38.003 CXX test/cpp_headers/env.o 00:04:38.003 CXX test/cpp_headers/event.o 00:04:38.003 CXX test/cpp_headers/fd_group.o 00:04:38.261 CXX test/cpp_headers/fd.o 00:04:38.261 CXX test/cpp_headers/file.o 00:04:38.261 CXX test/cpp_headers/ftl.o 00:04:38.261 CXX test/cpp_headers/gpt_spec.o 00:04:38.261 CXX test/cpp_headers/hexlify.o 00:04:38.261 CXX test/cpp_headers/histogram_data.o 00:04:38.519 CXX test/cpp_headers/idxd.o 00:04:38.519 CXX test/cpp_headers/idxd_spec.o 00:04:38.519 CXX test/cpp_headers/init.o 00:04:38.519 CXX test/cpp_headers/ioat.o 00:04:38.777 CXX test/cpp_headers/ioat_spec.o 00:04:38.777 CXX test/cpp_headers/iscsi_spec.o 00:04:38.777 CXX test/cpp_headers/json.o 00:04:38.777 CXX test/cpp_headers/jsonrpc.o 00:04:38.777 CXX test/cpp_headers/likely.o 00:04:38.777 LINK spdk_bdev 00:04:38.777 CXX test/cpp_headers/log.o 00:04:38.777 CXX test/cpp_headers/lvol.o 00:04:38.777 CXX test/cpp_headers/memory.o 00:04:38.777 CXX test/cpp_headers/mmio.o 00:04:38.777 CXX test/cpp_headers/nbd.o 00:04:39.035 CXX test/cpp_headers/notify.o 00:04:39.035 CXX test/cpp_headers/nvme.o 00:04:39.035 CXX test/cpp_headers/nvme_intel.o 00:04:39.035 CXX test/cpp_headers/nvme_ocssd.o 00:04:39.035 LINK cuse 00:04:39.035 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:39.035 CXX test/cpp_headers/nvme_spec.o 00:04:39.035 CXX test/cpp_headers/nvme_zns.o 00:04:39.035 CXX test/cpp_headers/nvmf_cmd.o 00:04:39.293 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:39.293 CXX test/cpp_headers/nvmf.o 00:04:39.293 CXX test/cpp_headers/nvmf_spec.o 00:04:39.293 CXX test/cpp_headers/nvmf_transport.o 00:04:39.293 CXX test/cpp_headers/opal.o 00:04:39.293 CXX test/cpp_headers/opal_spec.o 00:04:39.293 CXX test/cpp_headers/pci_ids.o 00:04:39.293 CXX test/cpp_headers/pipe.o 00:04:39.293 CXX test/cpp_headers/queue.o 00:04:39.551 CXX test/cpp_headers/reduce.o 00:04:39.551 CXX test/cpp_headers/rpc.o 00:04:39.551 CXX test/cpp_headers/scheduler.o 00:04:39.551 CXX test/cpp_headers/scsi.o 00:04:39.551 CXX test/cpp_headers/scsi_spec.o 00:04:39.551 CXX test/cpp_headers/sock.o 00:04:39.551 CXX test/cpp_headers/stdinc.o 00:04:39.551 CXX test/cpp_headers/string.o 00:04:39.551 CXX test/cpp_headers/thread.o 00:04:39.809 CXX test/cpp_headers/trace.o 00:04:39.809 CXX test/cpp_headers/trace_parser.o 00:04:39.809 CXX test/cpp_headers/tree.o 00:04:39.809 CXX test/cpp_headers/ublk.o 00:04:39.809 CXX test/cpp_headers/util.o 00:04:39.809 CXX test/cpp_headers/uuid.o 00:04:39.809 CXX test/cpp_headers/version.o 00:04:39.809 CXX test/cpp_headers/vfio_user_pci.o 00:04:39.809 CXX test/cpp_headers/vfio_user_spec.o 00:04:39.809 CXX test/cpp_headers/vhost.o 00:04:39.809 CXX test/cpp_headers/vmd.o 00:04:40.067 CXX 
test/cpp_headers/xor.o 00:04:40.067 CXX test/cpp_headers/zipf.o 00:04:40.326 LINK esnap 00:04:41.263 00:04:41.263 real 0m50.817s 00:04:41.263 user 4m59.401s 00:04:41.263 sys 1m3.268s 00:04:41.263 12:49:21 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:41.263 12:49:21 -- common/autotest_common.sh@10 -- $ set +x 00:04:41.263 ************************************ 00:04:41.263 END TEST make 00:04:41.263 ************************************ 00:04:41.522 12:49:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.522 12:49:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.522 12:49:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.522 12:49:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.522 12:49:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.522 12:49:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.522 12:49:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.522 12:49:22 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.522 12:49:22 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.522 12:49:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.522 12:49:22 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.522 12:49:22 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.522 12:49:22 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.522 12:49:22 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.522 12:49:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.522 12:49:22 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.522 12:49:22 -- scripts/common.sh@344 -- # : 1 00:04:41.522 12:49:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.522 12:49:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.522 12:49:22 -- scripts/common.sh@364 -- # decimal 1 00:04:41.522 12:49:22 -- scripts/common.sh@352 -- # local d=1 00:04:41.522 12:49:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.522 12:49:22 -- scripts/common.sh@354 -- # echo 1 00:04:41.522 12:49:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.522 12:49:22 -- scripts/common.sh@365 -- # decimal 2 00:04:41.522 12:49:22 -- scripts/common.sh@352 -- # local d=2 00:04:41.522 12:49:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.522 12:49:22 -- scripts/common.sh@354 -- # echo 2 00:04:41.522 12:49:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.522 12:49:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.522 12:49:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.522 12:49:22 -- scripts/common.sh@367 -- # return 0 00:04:41.522 12:49:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.522 12:49:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.522 --rc genhtml_branch_coverage=1 00:04:41.522 --rc genhtml_function_coverage=1 00:04:41.522 --rc genhtml_legend=1 00:04:41.522 --rc geninfo_all_blocks=1 00:04:41.522 --rc geninfo_unexecuted_blocks=1 00:04:41.522 00:04:41.522 ' 00:04:41.522 12:49:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.522 --rc genhtml_branch_coverage=1 00:04:41.522 --rc genhtml_function_coverage=1 00:04:41.522 --rc genhtml_legend=1 00:04:41.522 --rc geninfo_all_blocks=1 00:04:41.522 --rc geninfo_unexecuted_blocks=1 00:04:41.522 00:04:41.522 ' 00:04:41.522 12:49:22 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.522 --rc genhtml_branch_coverage=1 00:04:41.522 --rc genhtml_function_coverage=1 00:04:41.522 --rc genhtml_legend=1 00:04:41.522 --rc geninfo_all_blocks=1 00:04:41.522 --rc geninfo_unexecuted_blocks=1 00:04:41.522 00:04:41.522 ' 00:04:41.522 12:49:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.522 --rc genhtml_branch_coverage=1 00:04:41.522 --rc genhtml_function_coverage=1 00:04:41.522 --rc genhtml_legend=1 00:04:41.522 --rc geninfo_all_blocks=1 00:04:41.522 --rc geninfo_unexecuted_blocks=1 00:04:41.522 00:04:41.522 ' 00:04:41.522 12:49:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.522 12:49:22 -- nvmf/common.sh@7 -- # uname -s 00:04:41.522 12:49:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.522 12:49:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.522 12:49:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.522 12:49:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.522 12:49:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.522 12:49:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.522 12:49:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.522 12:49:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.522 12:49:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.522 12:49:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.522 12:49:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:04:41.522 12:49:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:04:41.522 12:49:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.522 12:49:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.522 12:49:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:41.522 12:49:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.522 12:49:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.522 12:49:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.522 12:49:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.522 12:49:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.522 12:49:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.522 12:49:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.522 12:49:22 -- paths/export.sh@5 -- # export PATH 00:04:41.522 12:49:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.522 12:49:22 -- nvmf/common.sh@46 -- # : 0 00:04:41.522 12:49:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:41.522 12:49:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:41.522 12:49:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:41.522 12:49:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.522 12:49:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.522 12:49:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:41.522 12:49:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:41.522 12:49:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:41.522 12:49:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.522 12:49:22 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.522 12:49:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.522 12:49:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.522 12:49:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.522 12:49:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.522 12:49:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.522 12:49:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.781 12:49:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.781 12:49:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.781 12:49:22 -- spdk/autotest.sh@48 -- # udevadm_pid=61507 00:04:41.781 12:49:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.781 12:49:22 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.781 12:49:22 -- spdk/autotest.sh@54 -- # echo 61509 00:04:41.781 12:49:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.781 12:49:22 -- spdk/autotest.sh@56 -- # echo 61510 00:04:41.781 12:49:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.781 12:49:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:41.781 12:49:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.781 12:49:22 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:41.781 12:49:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.781 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.781 12:49:22 -- spdk/autotest.sh@70 -- # create_test_list 00:04:41.781 12:49:22 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:41.781 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.781 12:49:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:41.781 12:49:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:41.781 12:49:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:41.781 12:49:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:41.781 12:49:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:41.781 12:49:22 -- spdk/autotest.sh@76 -- # 
freebsd_update_contigmem_mod 00:04:41.781 12:49:22 -- common/autotest_common.sh@1450 -- # uname 00:04:41.781 12:49:22 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:41.781 12:49:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:41.781 12:49:22 -- common/autotest_common.sh@1470 -- # uname 00:04:41.781 12:49:22 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:41.781 12:49:22 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:41.781 12:49:22 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:41.781 lcov: LCOV version 1.15 00:04:41.781 12:49:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:48.349 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:48.349 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:48.349 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:48.349 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:48.349 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:48.349 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:06.431 12:49:46 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:06.431 12:49:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.431 12:49:46 -- common/autotest_common.sh@10 -- # set +x 00:05:06.431 12:49:46 -- spdk/autotest.sh@89 -- # rm -f 00:05:06.431 12:49:46 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.431 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:06.431 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:06.431 12:49:46 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:06.431 12:49:46 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:06.431 12:49:46 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:06.431 12:49:46 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:06.431 12:49:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.431 12:49:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:06.431 12:49:46 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:06.431 12:49:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.431 12:49:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.431 12:49:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.431 12:49:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:06.431 12:49:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:06.431 12:49:46 -- common/autotest_common.sh@1659 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:05:06.431 12:49:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.431 12:49:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.431 12:49:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:06.431 12:49:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:06.431 12:49:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:06.431 12:49:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.431 12:49:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.431 12:49:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:06.431 12:49:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:06.431 12:49:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:06.431 12:49:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.431 12:49:46 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:06.431 12:49:46 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:06.431 12:49:46 -- spdk/autotest.sh@108 -- # grep -v p 00:05:06.431 12:49:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.431 12:49:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.431 12:49:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:06.431 12:49:46 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:06.431 12:49:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:06.431 No valid GPT data, bailing 00:05:06.431 12:49:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.431 12:49:46 -- scripts/common.sh@393 -- # pt= 00:05:06.431 12:49:46 -- scripts/common.sh@394 -- # return 1 00:05:06.431 12:49:46 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:06.431 1+0 records in 00:05:06.431 1+0 records out 00:05:06.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389067 s, 270 MB/s 00:05:06.431 12:49:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.431 12:49:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.431 12:49:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:06.431 12:49:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:06.431 12:49:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:06.431 No valid GPT data, bailing 00:05:06.431 12:49:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:06.431 12:49:47 -- scripts/common.sh@393 -- # pt= 00:05:06.431 12:49:47 -- scripts/common.sh@394 -- # return 1 00:05:06.431 12:49:47 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:06.431 1+0 records in 00:05:06.431 1+0 records out 00:05:06.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469881 s, 223 MB/s 00:05:06.431 12:49:47 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.431 12:49:47 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.431 12:49:47 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:06.431 12:49:47 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:06.431 12:49:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:06.431 No valid GPT data, bailing 00:05:06.431 12:49:47 -- 
scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:06.431 12:49:47 -- scripts/common.sh@393 -- # pt= 00:05:06.431 12:49:47 -- scripts/common.sh@394 -- # return 1 00:05:06.431 12:49:47 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:06.431 1+0 records in 00:05:06.431 1+0 records out 00:05:06.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467509 s, 224 MB/s 00:05:06.431 12:49:47 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.431 12:49:47 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.431 12:49:47 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:06.431 12:49:47 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:06.431 12:49:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:06.431 No valid GPT data, bailing 00:05:06.431 12:49:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:06.431 12:49:47 -- scripts/common.sh@393 -- # pt= 00:05:06.431 12:49:47 -- scripts/common.sh@394 -- # return 1 00:05:06.431 12:49:47 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:06.431 1+0 records in 00:05:06.431 1+0 records out 00:05:06.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478381 s, 219 MB/s 00:05:06.431 12:49:47 -- spdk/autotest.sh@116 -- # sync 00:05:07.011 12:49:47 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:07.011 12:49:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:07.011 12:49:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:08.927 12:49:49 -- spdk/autotest.sh@122 -- # uname -s 00:05:08.927 12:49:49 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:08.927 12:49:49 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:08.927 12:49:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.927 12:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.927 12:49:49 -- common/autotest_common.sh@10 -- # set +x 00:05:08.927 ************************************ 00:05:08.927 START TEST setup.sh 00:05:08.927 ************************************ 00:05:08.927 12:49:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:08.927 * Looking for test storage... 
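The dd sequence traced above follows a simple rule: for each /dev/nvme*n* namespace, ask blkid for a partition-table type, and only when none is reported zero out the first 1 MiB so later tests start from a clean device. A minimal sketch of that rule, assuming root privileges; the helper name wipe_if_unused is illustrative, and the PCI_BLOCKED and spdk-gpt.py checks the real script also performs are omitted here:

    #!/usr/bin/env bash
    # Zero the first MiB of every NVMe namespace that carries no partition table,
    # mirroring the "No valid GPT data, bailing" + dd pattern in the trace above.
    wipe_if_unused() {
        local dev=$1 pt
        pt=$(blkid -s PTTYPE -o value "$dev")      # empty output => no partition table found
        if [[ -n $pt ]]; then
            echo "$dev has a $pt partition table, leaving it alone"
            return 0
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1    # same 1 MiB clear as in the trace
    }

    for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
        wipe_if_unused "$dev"
    done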
00:05:08.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:08.927 12:49:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:08.927 12:49:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:08.927 12:49:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:08.927 12:49:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:08.927 12:49:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:08.927 12:49:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:08.927 12:49:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:08.927 12:49:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:08.927 12:49:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:08.927 12:49:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.927 12:49:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:08.927 12:49:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:08.927 12:49:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:08.927 12:49:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:08.927 12:49:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:08.927 12:49:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:08.927 12:49:49 -- scripts/common.sh@344 -- # : 1 00:05:08.927 12:49:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:08.927 12:49:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.927 12:49:49 -- scripts/common.sh@364 -- # decimal 1 00:05:08.927 12:49:49 -- scripts/common.sh@352 -- # local d=1 00:05:08.927 12:49:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.927 12:49:49 -- scripts/common.sh@354 -- # echo 1 00:05:08.927 12:49:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:08.927 12:49:49 -- scripts/common.sh@365 -- # decimal 2 00:05:08.927 12:49:49 -- scripts/common.sh@352 -- # local d=2 00:05:08.927 12:49:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.927 12:49:49 -- scripts/common.sh@354 -- # echo 2 00:05:08.927 12:49:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:08.927 12:49:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:08.927 12:49:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:08.927 12:49:49 -- scripts/common.sh@367 -- # return 0 00:05:08.927 12:49:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.927 12:49:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:08.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.927 --rc genhtml_branch_coverage=1 00:05:08.927 --rc genhtml_function_coverage=1 00:05:08.927 --rc genhtml_legend=1 00:05:08.927 --rc geninfo_all_blocks=1 00:05:08.927 --rc geninfo_unexecuted_blocks=1 00:05:08.927 00:05:08.927 ' 00:05:08.927 12:49:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:08.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.927 --rc genhtml_branch_coverage=1 00:05:08.927 --rc genhtml_function_coverage=1 00:05:08.927 --rc genhtml_legend=1 00:05:08.927 --rc geninfo_all_blocks=1 00:05:08.927 --rc geninfo_unexecuted_blocks=1 00:05:08.927 00:05:08.927 ' 00:05:08.927 12:49:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:08.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.927 --rc genhtml_branch_coverage=1 00:05:08.927 --rc genhtml_function_coverage=1 00:05:08.927 --rc genhtml_legend=1 00:05:08.927 --rc geninfo_all_blocks=1 00:05:08.927 --rc geninfo_unexecuted_blocks=1 00:05:08.927 00:05:08.927 ' 00:05:08.927 12:49:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:08.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.927 --rc genhtml_branch_coverage=1 00:05:08.927 --rc genhtml_function_coverage=1 00:05:08.927 --rc genhtml_legend=1 00:05:08.927 --rc geninfo_all_blocks=1 00:05:08.927 --rc geninfo_unexecuted_blocks=1 00:05:08.927 00:05:08.927 ' 00:05:08.927 12:49:49 -- setup/test-setup.sh@10 -- # uname -s 00:05:08.927 12:49:49 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:08.927 12:49:49 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:08.927 12:49:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.927 12:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.927 12:49:49 -- common/autotest_common.sh@10 -- # set +x 00:05:08.927 ************************************ 00:05:08.927 START TEST acl 00:05:08.927 ************************************ 00:05:08.927 12:49:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:09.186 * Looking for test storage... 00:05:09.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.186 12:49:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.186 12:49:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.186 12:49:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.186 12:49:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.186 12:49:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.186 12:49:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.186 12:49:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.186 12:49:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.186 12:49:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.186 12:49:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.186 12:49:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.186 12:49:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.186 12:49:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.186 12:49:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.186 12:49:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.186 12:49:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.186 12:49:49 -- scripts/common.sh@344 -- # : 1 00:05:09.186 12:49:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.186 12:49:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.186 12:49:49 -- scripts/common.sh@364 -- # decimal 1 00:05:09.186 12:49:49 -- scripts/common.sh@352 -- # local d=1 00:05:09.186 12:49:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.186 12:49:49 -- scripts/common.sh@354 -- # echo 1 00:05:09.186 12:49:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.186 12:49:49 -- scripts/common.sh@365 -- # decimal 2 00:05:09.186 12:49:49 -- scripts/common.sh@352 -- # local d=2 00:05:09.186 12:49:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.186 12:49:49 -- scripts/common.sh@354 -- # echo 2 00:05:09.186 12:49:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.186 12:49:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.186 12:49:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.186 12:49:49 -- scripts/common.sh@367 -- # return 0 00:05:09.186 12:49:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.186 12:49:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.186 --rc genhtml_branch_coverage=1 00:05:09.186 --rc genhtml_function_coverage=1 00:05:09.186 --rc genhtml_legend=1 00:05:09.186 --rc geninfo_all_blocks=1 00:05:09.186 --rc geninfo_unexecuted_blocks=1 00:05:09.186 00:05:09.186 ' 00:05:09.186 12:49:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.186 --rc genhtml_branch_coverage=1 00:05:09.186 --rc genhtml_function_coverage=1 00:05:09.186 --rc genhtml_legend=1 00:05:09.186 --rc geninfo_all_blocks=1 00:05:09.186 --rc geninfo_unexecuted_blocks=1 00:05:09.186 00:05:09.186 ' 00:05:09.186 12:49:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.186 --rc genhtml_branch_coverage=1 00:05:09.186 --rc genhtml_function_coverage=1 00:05:09.186 --rc genhtml_legend=1 00:05:09.186 --rc geninfo_all_blocks=1 00:05:09.186 --rc geninfo_unexecuted_blocks=1 00:05:09.186 00:05:09.186 ' 00:05:09.186 12:49:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.186 --rc genhtml_branch_coverage=1 00:05:09.186 --rc genhtml_function_coverage=1 00:05:09.186 --rc genhtml_legend=1 00:05:09.186 --rc geninfo_all_blocks=1 00:05:09.186 --rc geninfo_unexecuted_blocks=1 00:05:09.186 00:05:09.186 ' 00:05:09.186 12:49:49 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:09.186 12:49:49 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:09.186 12:49:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:09.186 12:49:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:09.186 12:49:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.186 12:49:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:09.186 12:49:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:09.186 12:49:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.186 12:49:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.186 12:49:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.186 12:49:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:09.186 12:49:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:09.186 12:49:49 -- 
common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:09.186 12:49:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.186 12:49:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.186 12:49:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:09.187 12:49:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:09.187 12:49:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:09.187 12:49:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.187 12:49:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.187 12:49:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:09.187 12:49:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:09.187 12:49:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:09.187 12:49:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.187 12:49:49 -- setup/acl.sh@12 -- # devs=() 00:05:09.187 12:49:49 -- setup/acl.sh@12 -- # declare -a devs 00:05:09.187 12:49:49 -- setup/acl.sh@13 -- # drivers=() 00:05:09.187 12:49:49 -- setup/acl.sh@13 -- # declare -A drivers 00:05:09.187 12:49:49 -- setup/acl.sh@51 -- # setup reset 00:05:09.187 12:49:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.187 12:49:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.119 12:49:50 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:10.119 12:49:50 -- setup/acl.sh@16 -- # local dev driver 00:05:10.119 12:49:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.119 12:49:50 -- setup/acl.sh@15 -- # setup output status 00:05:10.119 12:49:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.119 12:49:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:10.119 Hugepages 00:05:10.119 node hugesize free / total 00:05:10.119 12:49:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:10.119 12:49:50 -- setup/acl.sh@19 -- # continue 00:05:10.119 12:49:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.119 00:05:10.119 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.119 12:49:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:10.119 12:49:50 -- setup/acl.sh@19 -- # continue 00:05:10.119 12:49:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.119 12:49:50 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:10.119 12:49:50 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:10.119 12:49:50 -- setup/acl.sh@20 -- # continue 00:05:10.119 12:49:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.119 12:49:50 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:10.119 12:49:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.119 12:49:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:10.119 12:49:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.119 12:49:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:10.119 12:49:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.377 12:49:50 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:10.377 12:49:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.377 12:49:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:10.377 12:49:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.377 12:49:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
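The collect_setup_devs trace above reads the `setup.sh status` table one row at a time, keeps rows whose second column looks like a PCI address, and records those whose driver column is nvme. A functionally equivalent loop, assuming the same column order (Type BDF Vendor Device NUMA Driver ...); the PCI_BLOCKED filtering visible in the trace is left out for brevity:

    devs=()
    declare -A drivers
    # Keep only PCI functions bound to the nvme driver; header and hugepage
    # rows fail the *:*:*.* test and are skipped, as in the trace above.
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue
        [[ $driver == nvme ]] || continue
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)
    echo "collected ${#devs[@]} NVMe device(s): ${devs[*]}"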
00:05:10.377 12:49:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.377 12:49:50 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:10.377 12:49:50 -- setup/acl.sh@54 -- # run_test denied denied 00:05:10.377 12:49:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.377 12:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.377 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:05:10.377 ************************************ 00:05:10.377 START TEST denied 00:05:10.377 ************************************ 00:05:10.377 12:49:50 -- common/autotest_common.sh@1114 -- # denied 00:05:10.377 12:49:50 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:10.377 12:49:50 -- setup/acl.sh@38 -- # setup output config 00:05:10.377 12:49:50 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:10.377 12:49:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.377 12:49:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.313 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:11.313 12:49:51 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:11.313 12:49:51 -- setup/acl.sh@28 -- # local dev driver 00:05:11.313 12:49:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:11.313 12:49:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:11.313 12:49:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:11.313 12:49:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:11.313 12:49:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:11.313 12:49:51 -- setup/acl.sh@41 -- # setup reset 00:05:11.313 12:49:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.313 12:49:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.881 00:05:11.881 real 0m1.511s 00:05:11.881 user 0m0.612s 00:05:11.881 sys 0m0.859s 00:05:11.881 12:49:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.881 12:49:52 -- common/autotest_common.sh@10 -- # set +x 00:05:11.881 ************************************ 00:05:11.881 END TEST denied 00:05:11.881 ************************************ 00:05:11.881 12:49:52 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:11.881 12:49:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.881 12:49:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.881 12:49:52 -- common/autotest_common.sh@10 -- # set +x 00:05:11.881 ************************************ 00:05:11.881 START TEST allowed 00:05:11.881 ************************************ 00:05:11.881 12:49:52 -- common/autotest_common.sh@1114 -- # allowed 00:05:11.881 12:49:52 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:11.881 12:49:52 -- setup/acl.sh@45 -- # setup output config 00:05:11.881 12:49:52 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:11.881 12:49:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.881 12:49:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.817 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.817 12:49:53 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:12.817 12:49:53 -- setup/acl.sh@28 -- # local dev driver 00:05:12.817 12:49:53 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:12.817 12:49:53 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:12.817 12:49:53 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:05:12.817 12:49:53 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:12.817 12:49:53 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:12.817 12:49:53 -- setup/acl.sh@48 -- # setup reset 00:05:12.817 12:49:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.817 12:49:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.385 00:05:13.385 real 0m1.598s 00:05:13.385 user 0m0.719s 00:05:13.385 sys 0m0.884s 00:05:13.385 12:49:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.385 12:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:13.385 ************************************ 00:05:13.385 END TEST allowed 00:05:13.385 ************************************ 00:05:13.385 00:05:13.385 real 0m4.513s 00:05:13.385 user 0m1.991s 00:05:13.385 sys 0m2.506s 00:05:13.385 12:49:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.385 12:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:13.385 ************************************ 00:05:13.385 END TEST acl 00:05:13.385 ************************************ 00:05:13.644 12:49:54 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:13.644 12:49:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.644 12:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.644 12:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:13.644 ************************************ 00:05:13.644 START TEST hugepages 00:05:13.644 ************************************ 00:05:13.644 12:49:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:13.645 * Looking for test storage... 00:05:13.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:13.645 12:49:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:13.645 12:49:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:13.645 12:49:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:13.645 12:49:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:13.645 12:49:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:13.645 12:49:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:13.645 12:49:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:13.645 12:49:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:13.645 12:49:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:13.645 12:49:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.645 12:49:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:13.645 12:49:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:13.645 12:49:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:13.645 12:49:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:13.645 12:49:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:13.645 12:49:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:13.645 12:49:54 -- scripts/common.sh@344 -- # : 1 00:05:13.645 12:49:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:13.645 12:49:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.645 12:49:54 -- scripts/common.sh@364 -- # decimal 1 00:05:13.645 12:49:54 -- scripts/common.sh@352 -- # local d=1 00:05:13.645 12:49:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.645 12:49:54 -- scripts/common.sh@354 -- # echo 1 00:05:13.645 12:49:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:13.645 12:49:54 -- scripts/common.sh@365 -- # decimal 2 00:05:13.645 12:49:54 -- scripts/common.sh@352 -- # local d=2 00:05:13.645 12:49:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.645 12:49:54 -- scripts/common.sh@354 -- # echo 2 00:05:13.645 12:49:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:13.645 12:49:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:13.645 12:49:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:13.645 12:49:54 -- scripts/common.sh@367 -- # return 0 00:05:13.645 12:49:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.645 12:49:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.645 --rc genhtml_branch_coverage=1 00:05:13.645 --rc genhtml_function_coverage=1 00:05:13.645 --rc genhtml_legend=1 00:05:13.645 --rc geninfo_all_blocks=1 00:05:13.645 --rc geninfo_unexecuted_blocks=1 00:05:13.645 00:05:13.645 ' 00:05:13.645 12:49:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.645 --rc genhtml_branch_coverage=1 00:05:13.645 --rc genhtml_function_coverage=1 00:05:13.645 --rc genhtml_legend=1 00:05:13.645 --rc geninfo_all_blocks=1 00:05:13.645 --rc geninfo_unexecuted_blocks=1 00:05:13.645 00:05:13.645 ' 00:05:13.645 12:49:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.645 --rc genhtml_branch_coverage=1 00:05:13.645 --rc genhtml_function_coverage=1 00:05:13.645 --rc genhtml_legend=1 00:05:13.645 --rc geninfo_all_blocks=1 00:05:13.645 --rc geninfo_unexecuted_blocks=1 00:05:13.645 00:05:13.645 ' 00:05:13.645 12:49:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.645 --rc genhtml_branch_coverage=1 00:05:13.645 --rc genhtml_function_coverage=1 00:05:13.645 --rc genhtml_legend=1 00:05:13.645 --rc geninfo_all_blocks=1 00:05:13.645 --rc geninfo_unexecuted_blocks=1 00:05:13.645 00:05:13.645 ' 00:05:13.645 12:49:54 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:13.645 12:49:54 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:13.645 12:49:54 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:13.645 12:49:54 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:13.645 12:49:54 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:13.645 12:49:54 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:13.645 12:49:54 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:13.645 12:49:54 -- setup/common.sh@18 -- # local node= 00:05:13.645 12:49:54 -- setup/common.sh@19 -- # local var val 00:05:13.645 12:49:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.645 12:49:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.645 12:49:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.645 12:49:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.645 12:49:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.645 
12:49:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 4689312 kB' 'MemAvailable: 7322412 kB' 'Buffers: 2684 kB' 'Cached: 2834752 kB' 'SwapCached: 0 kB' 'Active: 495756 kB' 'Inactive: 2457756 kB' 'Active(anon): 126588 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457756 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 117788 kB' 'Mapped: 50900 kB' 'Shmem: 10512 kB' 'KReclaimable: 86244 kB' 'Slab: 188240 kB' 'SReclaimable: 86244 kB' 'SUnreclaim: 101996 kB' 'KernelStack: 6768 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 310144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- 
setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.645 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.645 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 
12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # continue 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.646 12:49:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.646 12:49:54 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.646 12:49:54 -- setup/common.sh@33 -- # echo 2048 00:05:13.646 12:49:54 -- setup/common.sh@33 -- # return 0 00:05:13.646 12:49:54 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:13.646 12:49:54 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:13.646 12:49:54 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:13.646 12:49:54 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:13.646 12:49:54 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:13.646 12:49:54 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:13.646 12:49:54 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:13.646 12:49:54 -- setup/hugepages.sh@207 -- # get_nodes 00:05:13.646 12:49:54 -- setup/hugepages.sh@27 -- # local node 00:05:13.646 12:49:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.646 12:49:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:13.646 12:49:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:13.646 12:49:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.646 12:49:54 -- setup/hugepages.sh@208 -- # clear_hp 00:05:13.646 12:49:54 -- setup/hugepages.sh@37 -- # local node hp 00:05:13.647 12:49:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:13.647 12:49:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.647 12:49:54 -- setup/hugepages.sh@41 -- # echo 0 00:05:13.647 12:49:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.647 12:49:54 -- setup/hugepages.sh@41 -- # echo 0 00:05:13.905 
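The get_meminfo walk traced above boils down to a key lookup over /proc/meminfo: split each line on ': ', skip entries until the requested key (Hugepagesize) matches, then echo its value, 2048 here. A compact equivalent for the system-wide case; SPDK's version first snapshots the file into an array and also handles per-node lookups under /sys/devices/system/node, which this sketch omits:

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"            # numeric value; the trailing "kB" is absorbed by the throw-away field
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this runner
    echo "hugepage size: ${default_hugepages} kB"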
12:49:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:13.905 12:49:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:13.905 12:49:54 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:13.905 12:49:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.905 12:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.905 12:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:13.905 ************************************ 00:05:13.905 START TEST default_setup 00:05:13.905 ************************************ 00:05:13.905 12:49:54 -- common/autotest_common.sh@1114 -- # default_setup 00:05:13.905 12:49:54 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:13.905 12:49:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:13.905 12:49:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:13.905 12:49:54 -- setup/hugepages.sh@51 -- # shift 00:05:13.905 12:49:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:13.905 12:49:54 -- setup/hugepages.sh@52 -- # local node_ids 00:05:13.905 12:49:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.905 12:49:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:13.905 12:49:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:13.905 12:49:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:13.905 12:49:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.905 12:49:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.905 12:49:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.905 12:49:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.905 12:49:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.905 12:49:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:13.905 12:49:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:13.905 12:49:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:13.905 12:49:54 -- setup/hugepages.sh@73 -- # return 0 00:05:13.905 12:49:54 -- setup/hugepages.sh@137 -- # setup output 00:05:13.905 12:49:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.905 12:49:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.472 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:14.733 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:14.733 12:49:55 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:14.733 12:49:55 -- setup/hugepages.sh@89 -- # local node 00:05:14.733 12:49:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.733 12:49:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.733 12:49:55 -- setup/hugepages.sh@92 -- # local surp 00:05:14.733 12:49:55 -- setup/hugepages.sh@93 -- # local resv 00:05:14.733 12:49:55 -- setup/hugepages.sh@94 -- # local anon 00:05:14.733 12:49:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.733 12:49:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.733 12:49:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.733 12:49:55 -- setup/common.sh@18 -- # local node= 00:05:14.733 12:49:55 -- setup/common.sh@19 -- # local var val 00:05:14.733 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.733 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.733 12:49:55 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:14.733 12:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.733 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.733 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6733628 kB' 'MemAvailable: 9366568 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 497912 kB' 'Inactive: 2457760 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119880 kB' 'Mapped: 51032 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187892 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101972 kB' 'KernelStack: 6816 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.733 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.733 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- 
setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.734 12:49:55 -- setup/common.sh@33 -- # echo 0 00:05:14.734 12:49:55 -- setup/common.sh@33 -- # return 0 00:05:14.734 12:49:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:14.734 12:49:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.734 12:49:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.734 12:49:55 -- setup/common.sh@18 -- # local node= 00:05:14.734 12:49:55 -- setup/common.sh@19 -- # local var val 00:05:14.734 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.734 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.734 12:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.734 12:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.734 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.734 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6733916 kB' 'MemAvailable: 9366864 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 497748 kB' 'Inactive: 2457768 kB' 'Active(anon): 128580 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 
'Inactive(file): 2457768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 50980 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187888 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101968 kB' 'KernelStack: 6768 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.734 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.734 12:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # 
continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.735 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:14.735 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.736 12:49:55 -- setup/common.sh@33 -- # echo 0 00:05:14.736 12:49:55 -- setup/common.sh@33 -- # return 0 00:05:14.736 12:49:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:14.736 12:49:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.736 12:49:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.736 12:49:55 -- setup/common.sh@18 -- # local node= 00:05:14.736 12:49:55 -- setup/common.sh@19 -- # local var val 00:05:14.736 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.736 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.736 12:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.736 12:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.736 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.736 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6733916 kB' 'MemAvailable: 9366864 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 497596 kB' 'Inactive: 2457768 kB' 'Active(anon): 128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119560 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187888 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101968 kB' 'KernelStack: 6816 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 
'DirectMap1G: 9437184 kB' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 
00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.736 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.736 12:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 
12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.737 12:49:55 -- setup/common.sh@33 -- # echo 0 00:05:14.737 12:49:55 -- setup/common.sh@33 -- # return 0 00:05:14.737 12:49:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:14.737 nr_hugepages=1024 00:05:14.737 12:49:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.737 resv_hugepages=0 00:05:14.737 12:49:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.737 surplus_hugepages=0 00:05:14.737 12:49:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.737 anon_hugepages=0 00:05:14.737 12:49:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.737 12:49:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.737 12:49:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.737 12:49:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.737 12:49:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.737 12:49:55 -- setup/common.sh@18 -- # local node= 00:05:14.737 12:49:55 -- setup/common.sh@19 -- # local var val 00:05:14.737 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.737 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.737 12:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.737 12:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.737 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.737 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6733916 kB' 'MemAvailable: 9366864 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 497256 kB' 'Inactive: 2457768 kB' 'Active(anon): 128088 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119168 kB' 'Mapped: 50848 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187872 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101952 kB' 'KernelStack: 6688 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 
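(By this point the trace has read anon=0, surp=0 and resv=0 from /proc/meminfo and echoes nr_hugepages=1024; that 1024 is consistent with get_test_nr_hugepages dividing the requested 2097152 kB by the 2048 kB default page size, and the (( ... )) checks that follow confirm the kernel's total hugepage count accounts for the requested pages plus surplus and reserved pages. Below is a hedged, simplified stand-in for that arithmetic, not the repo's exact verify_nr_hugepages.)

#!/usr/bin/env bash
# Simplified stand-in for the verification the trace performs.
size_kb=2097152                          # requested hugepage memory, in kB
page_kb=2048                             # default hugepage size from /proc/meminfo
nr_hugepages=$(( size_kb / page_kb ))    # 1024 pages

# Surplus/reserved/total counts straight from /proc/meminfo.
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

# The pool is healthy when the kernel accounts for exactly the pages we
# asked for plus any surplus and reserved pages.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepages verified: total=$total"
else
    echo "hugepage mismatch: total=$total expected=$(( nr_hugepages + surp + resv ))" >&2
    exit 1
fi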
00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.737 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.737 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 
00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.738 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.738 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.739 12:49:55 -- setup/common.sh@33 -- # echo 1024 00:05:14.739 12:49:55 -- setup/common.sh@33 -- # return 0 00:05:14.739 12:49:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.739 12:49:55 -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.739 12:49:55 -- setup/hugepages.sh@27 -- # local node 00:05:14.739 12:49:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.739 12:49:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.739 12:49:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.739 12:49:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.739 12:49:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.739 12:49:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.739 12:49:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.739 12:49:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.739 12:49:55 -- setup/common.sh@18 -- # local node=0 00:05:14.739 12:49:55 -- 
setup/common.sh@19 -- # local var val 00:05:14.739 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.739 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.739 12:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.739 12:49:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.739 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.739 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6733672 kB' 'MemUsed: 5505444 kB' 'SwapCached: 0 kB' 'Active: 497588 kB' 'Inactive: 2457772 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457772 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2837428 kB' 'Mapped: 50848 kB' 'AnonPages: 119504 kB' 'Shmem: 10488 kB' 'KernelStack: 6804 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85920 kB' 'Slab: 187872 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.739 12:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.739 12:49:55 -- setup/common.sh@32 -- 
# continue 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.739 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.740 12:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.740 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.740 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.740 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.740 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.740 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.740 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.740 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.740 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.740 12:49:55 -- setup/common.sh@32 -- # continue 00:05:14.740 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.740 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.740 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.740 12:49:55 -- setup/common.sh@33 -- # echo 0 00:05:14.740 12:49:55 -- setup/common.sh@33 -- # return 0 00:05:14.740 12:49:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.740 12:49:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.740 12:49:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.740 12:49:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.740 node0=1024 expecting 1024 00:05:14.740 12:49:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.740 12:49:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.740 00:05:14.740 real 0m1.007s 00:05:14.740 user 0m0.494s 00:05:14.740 sys 0m0.458s 00:05:14.740 12:49:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.740 12:49:55 -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 ************************************ 00:05:14.740 END TEST default_setup 00:05:14.740 ************************************ 00:05:14.740 12:49:55 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:14.740 12:49:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.740 12:49:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.740 12:49:55 -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 ************************************ 00:05:14.740 START TEST per_node_1G_alloc 00:05:14.740 ************************************ 00:05:14.740 12:49:55 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:14.740 12:49:55 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:14.740 12:49:55 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:14.740 12:49:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:14.740 12:49:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:14.740 12:49:55 -- setup/hugepages.sh@51 -- # shift 00:05:14.740 12:49:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:14.740 12:49:55 -- setup/hugepages.sh@52 -- # local node_ids 00:05:14.740 12:49:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.740 12:49:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:14.740 12:49:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:14.740 12:49:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:14.740 12:49:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.740 12:49:55 -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=512 00:05:14.740 12:49:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.740 12:49:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.740 12:49:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.740 12:49:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:14.740 12:49:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:14.740 12:49:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:14.740 12:49:55 -- setup/hugepages.sh@73 -- # return 0 00:05:14.740 12:49:55 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:14.740 12:49:55 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:14.740 12:49:55 -- setup/hugepages.sh@146 -- # setup output 00:05:14.740 12:49:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.740 12:49:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.311 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.311 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.311 12:49:55 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:15.311 12:49:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:15.311 12:49:55 -- setup/hugepages.sh@89 -- # local node 00:05:15.311 12:49:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.311 12:49:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.311 12:49:55 -- setup/hugepages.sh@92 -- # local surp 00:05:15.311 12:49:55 -- setup/hugepages.sh@93 -- # local resv 00:05:15.311 12:49:55 -- setup/hugepages.sh@94 -- # local anon 00:05:15.311 12:49:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.311 12:49:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.311 12:49:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.311 12:49:55 -- setup/common.sh@18 -- # local node= 00:05:15.311 12:49:55 -- setup/common.sh@19 -- # local var val 00:05:15.311 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.311 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.311 12:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.311 12:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.311 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.311 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7779060 kB' 'MemAvailable: 10412012 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 498116 kB' 'Inactive: 2457772 kB' 'Active(anon): 128948 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457772 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 50960 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187920 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 102000 kB' 'KernelStack: 6792 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.311 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.311 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
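The xtrace on either side of this point is common.sh's get_meminfo helper walking every "Field: value" line of the meminfo dump until it reaches the field it was asked for, AnonHugePages in this pass; each skipped field shows up as one "[[ X == ... ]] / continue" pair. A minimal, self-contained sketch of that parsing pattern follows. It is an illustration only: the name get_meminfo_sketch and its argument handling are assumptions, not the exact SPDK helper.

    #!/usr/bin/env bash
    # Sketch only: mirrors the IFS=': ' / read / continue loop seen in the trace.
    shopt -s extglob   # the trace strips the "Node <n> " prefix with an extglob pattern

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node files prefix every line with "Node <n> ", so point at the node file
        # when a node was requested (assumption: same layout as in the trace).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node 0 " prefix, as logged
        local var val _
        while IFS=': ' read -r var val _; do   # "HugePages_Total:   1024" -> var, val
            [[ $var == "$get" ]] || continue   # skip every other meminfo field
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Run against the node-0 file, get_meminfo_sketch HugePages_Total 0 should print the same 1024 that appears in the node-0 meminfo dump earlier in this trace.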
00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # 
[[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.312 12:49:55 -- setup/common.sh@33 -- # echo 0 00:05:15.312 12:49:55 -- setup/common.sh@33 -- # return 0 00:05:15.312 12:49:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:15.312 12:49:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.312 12:49:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.312 12:49:55 -- setup/common.sh@18 -- # local node= 00:05:15.312 12:49:55 -- setup/common.sh@19 -- # local var val 00:05:15.312 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.312 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.312 12:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.312 12:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.312 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.312 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7779060 kB' 'MemAvailable: 10412012 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 497744 kB' 'Inactive: 2457772 kB' 'Active(anon): 128576 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457772 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119664 kB' 'Mapped: 50960 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187920 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 102000 kB' 'KernelStack: 6728 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.312 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.312 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- 
# continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.313 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.313 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
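Around this point hugepages.sh's verify_nr_hugepages is pulling HugePages_Surp (and, next, HugePages_Rsvd) so it can confirm that the kernel's counters add up to what the test configured: 1024 pages for default_setup above, and 512 pages pinned to node 0 (NRHUGE=512 HUGENODE=0) for this per_node_1G_alloc run. A compressed sketch of that bookkeeping follows, reusing the get_meminfo_sketch helper from the earlier note; the function name and its "node=pages" argument format are assumptions, not the script's real interface.

    # Sketch only: the identity being checked is
    #   HugePages_Total == requested nr_hugepages + HugePages_Surp + HugePages_Rsvd
    # plus a per-node "nodeN=<actual> expecting <requested>" comparison.
    verify_nr_hugepages_sketch() {
        local nr_hugepages=$1; shift          # e.g. 1024 (default_setup) or 512 (this test)
        local -A expected=()                  # node -> pages, from args like "0=512"
        local spec
        for spec in "$@"; do expected[${spec%%=*}]=${spec#*=}; done
        local total surp resv
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        (( total == nr_hugepages + surp + resv )) || return 1
        local node node_total
        for node in "${!expected[@]}"; do
            node_total=$(get_meminfo_sketch HugePages_Total "$node")
            echo "node$node=$node_total expecting ${expected[$node]}"
            [[ $node_total == "${expected[$node]}" ]] || return 1
        done
    }

verify_nr_hugepages_sketch 1024 0=1024 would reproduce the "node0=1024 expecting 1024" line printed at the end of default_setup above; the traced script builds its per-node expectations from NRHUGE/HUGENODE (the nodes_test array) rather than taking them as arguments.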
00:05:15.314 12:49:55 -- setup/common.sh@33 -- # echo 0 00:05:15.314 12:49:55 -- setup/common.sh@33 -- # return 0 00:05:15.314 12:49:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:15.314 12:49:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.314 12:49:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.314 12:49:55 -- setup/common.sh@18 -- # local node= 00:05:15.314 12:49:55 -- setup/common.sh@19 -- # local var val 00:05:15.314 12:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.314 12:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.314 12:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.314 12:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.314 12:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.314 12:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7779116 kB' 'MemAvailable: 10412068 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 497612 kB' 'Inactive: 2457772 kB' 'Active(anon): 128444 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457772 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119624 kB' 'Mapped: 50848 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187940 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 102020 kB' 'KernelStack: 6816 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 
12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 
12:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:55 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.314 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.314 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.314 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.314 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': 
' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.315 12:49:56 -- setup/common.sh@33 -- # echo 0 00:05:15.315 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:15.315 12:49:56 -- setup/hugepages.sh@100 -- # resv=0 00:05:15.315 nr_hugepages=512 00:05:15.315 12:49:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:15.315 resv_hugepages=0 00:05:15.315 12:49:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.315 surplus_hugepages=0 00:05:15.315 12:49:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.315 anon_hugepages=0 00:05:15.315 12:49:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.315 12:49:56 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:15.315 12:49:56 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:15.315 12:49:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.315 12:49:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.315 12:49:56 -- setup/common.sh@18 -- # local node= 00:05:15.315 12:49:56 -- setup/common.sh@19 -- # local var val 00:05:15.315 12:49:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.315 12:49:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.315 12:49:56 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:15.315 12:49:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.315 12:49:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.315 12:49:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7779116 kB' 'MemAvailable: 10412068 kB' 'Buffers: 2684 kB' 'Cached: 2834744 kB' 'SwapCached: 0 kB' 'Active: 497616 kB' 'Inactive: 2457772 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457772 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 50796 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187912 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101992 kB' 'KernelStack: 6768 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.315 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.315 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 
12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.316 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.316 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.317 12:49:56 -- setup/common.sh@33 -- # echo 512 00:05:15.317 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:15.317 12:49:56 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:15.317 12:49:56 -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.317 12:49:56 -- setup/hugepages.sh@27 -- # local node 00:05:15.317 12:49:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.317 12:49:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:15.317 12:49:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.317 12:49:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.317 12:49:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.317 12:49:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.317 12:49:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.317 12:49:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.317 12:49:56 -- setup/common.sh@18 -- # local node=0 00:05:15.317 12:49:56 -- setup/common.sh@19 -- # local var val 00:05:15.317 12:49:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.317 12:49:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.317 12:49:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.317 12:49:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.317 12:49:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.317 12:49:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7779116 kB' 'MemUsed: 4460000 kB' 'SwapCached: 0 kB' 'Active: 497500 kB' 'Inactive: 2457772 kB' 'Active(anon): 128332 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457772 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2837428 kB' 'Mapped: 50848 kB' 'AnonPages: 119476 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85920 kB' 'Slab: 187908 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101988 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.317 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.317 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.318 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.318 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.318 12:49:56 -- setup/common.sh@33 -- # echo 0 00:05:15.318 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:15.318 12:49:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.318 12:49:56 -- 
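For readability: a minimal bash sketch of what the get_meminfo helper traced above (setup/common.sh@17-@33) appears to do, reconstructed only from the xtrace — it reads /proc/meminfo, or a node's meminfo when a node index is passed, strips the "Node N " prefix, and walks the "Key: value" pairs until the requested key is found. Names follow the trace; the body is an approximation, not the shipped script.

    shopt -s extglob                     # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node index, prefer that node's meminfo if it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk "Key: value [kB]" pairs until the requested key matches.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. get_meminfo HugePages_Surp 0   -> surplus huge pages on node 0, as invoked in the trace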
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.318 12:49:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.318 12:49:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.318 node0=512 expecting 512 00:05:15.318 12:49:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:15.318 12:49:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:15.318 00:05:15.318 real 0m0.565s 00:05:15.318 user 0m0.277s 00:05:15.318 sys 0m0.295s 00:05:15.318 12:49:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.318 12:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:15.318 ************************************ 00:05:15.318 END TEST per_node_1G_alloc 00:05:15.318 ************************************ 00:05:15.577 12:49:56 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:15.577 12:49:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.577 12:49:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.577 12:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:15.577 ************************************ 00:05:15.577 START TEST even_2G_alloc 00:05:15.577 ************************************ 00:05:15.577 12:49:56 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:15.577 12:49:56 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:15.577 12:49:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:15.577 12:49:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:15.577 12:49:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.577 12:49:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:15.577 12:49:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:15.577 12:49:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:15.577 12:49:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.577 12:49:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:15.577 12:49:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.577 12:49:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.577 12:49:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.577 12:49:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:15.577 12:49:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:15.577 12:49:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:15.577 12:49:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:15.577 12:49:56 -- setup/hugepages.sh@83 -- # : 0 00:05:15.577 12:49:56 -- setup/hugepages.sh@84 -- # : 0 00:05:15.577 12:49:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:15.577 12:49:56 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:15.577 12:49:56 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:15.577 12:49:56 -- setup/hugepages.sh@153 -- # setup output 00:05:15.577 12:49:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.577 12:49:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.838 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.838 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.838 12:49:56 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:15.838 12:49:56 -- setup/hugepages.sh@89 -- # local node 00:05:15.838 12:49:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.838 12:49:56 -- 
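The per_node_1G_alloc run above ends with the expected "node0=512 expecting 512". The even_2G_alloc test that starts next asks get_test_nr_hugepages for 2097152 kB and arrives at nr_hugepages=1024, consistent with the 2048 kB hugepage size reported in meminfo; with a single NUMA node the whole pool is assigned to node 0, and the allocation is rerun with HUGE_EVEN_ALLOC=yes. A rough sketch of that sizing step, inferred from the trace (the division itself is not shown explicitly, and the real helpers have more branches for user-supplied node lists):

    default_hugepages=2048                       # kB, "Hugepagesize: 2048 kB" in meminfo

    get_test_nr_hugepages() {
        local size=$1                            # requested pool size in kB, e.g. 2097152
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        get_test_nr_hugepages_per_node
    }

    get_test_nr_hugepages_per_node() {
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=1                        # one NUMA node on this VM
        nodes_test=()
        # With no explicit per-node request, the trace shows the whole pool
        # landing on the last (here: only) node.
        (( _no_nodes > 0 )) && nodes_test[_no_nodes - 1]=$_nr_hugepages
    }

    # The test then redoes the allocation with even distribution requested,
    # roughly: NRHUGE=1024 HUGE_EVEN_ALLOC=yes ./scripts/setup.sh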
setup/hugepages.sh@91 -- # local sorted_s 00:05:15.838 12:49:56 -- setup/hugepages.sh@92 -- # local surp 00:05:15.838 12:49:56 -- setup/hugepages.sh@93 -- # local resv 00:05:15.838 12:49:56 -- setup/hugepages.sh@94 -- # local anon 00:05:15.838 12:49:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.838 12:49:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.838 12:49:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.838 12:49:56 -- setup/common.sh@18 -- # local node= 00:05:15.838 12:49:56 -- setup/common.sh@19 -- # local var val 00:05:15.838 12:49:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.838 12:49:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.838 12:49:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.838 12:49:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.838 12:49:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.838 12:49:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.838 12:49:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6741076 kB' 'MemAvailable: 9374032 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 498332 kB' 'Inactive: 2457776 kB' 'Active(anon): 129164 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187908 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101988 kB' 'KernelStack: 6804 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.838 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.838 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 
12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 
12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.839 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.839 12:49:56 -- setup/common.sh@33 -- # echo 0 00:05:15.839 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:15.839 12:49:56 -- setup/hugepages.sh@97 -- # anon=0 00:05:15.839 12:49:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.839 12:49:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.839 12:49:56 -- setup/common.sh@18 -- # local node= 00:05:15.839 12:49:56 -- setup/common.sh@19 -- # local var val 00:05:15.839 12:49:56 -- 
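One detail worth calling out from the verify_nr_hugepages trace above: before reading AnonHugePages, setup/hugepages.sh@96 tests the expanded string "always [madvise] never" against *\[never\]*, i.e. anonymous (transparent) huge pages are only counted when THP is not globally disabled; here the check passes and anon ends up 0. A small sketch of that gate — the sysfs path is an assumption, since the trace only shows the already-expanded value:

    # Sketch of the anon-hugepage gate seen at setup/hugepages.sh@96-@97.
    # NOTE: the sysfs path is an assumption; the trace only shows the expanded
    # string "always [madvise] never".
    thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_enabled != *'[never]'* ]]; then
        # THP is not fully disabled, so AnonHugePages from meminfo is meaningful.
        anon=$(get_meminfo AnonHugePages)        # get_meminfo as sketched earlier
    fi
    echo "anon_hugepages=$anon"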
setup/common.sh@20 -- # local mem_f mem 00:05:15.839 12:49:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.839 12:49:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.839 12:49:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.839 12:49:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.839 12:49:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.839 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6741076 kB' 'MemAvailable: 9374032 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497900 kB' 'Inactive: 2457776 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119572 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187912 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101992 kB' 'KernelStack: 6772 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 
00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- 
setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.840 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.840 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.841 12:49:56 -- setup/common.sh@33 -- # echo 0 00:05:15.841 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:15.841 12:49:56 -- setup/hugepages.sh@99 -- # surp=0 00:05:15.841 12:49:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.841 12:49:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.841 12:49:56 -- setup/common.sh@18 -- # local node= 00:05:15.841 12:49:56 -- setup/common.sh@19 -- # local var val 00:05:15.841 12:49:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.841 12:49:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.841 12:49:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.841 12:49:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.841 12:49:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.841 12:49:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6740824 kB' 'MemAvailable: 9373780 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497444 kB' 'Inactive: 2457776 kB' 'Active(anon): 128276 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119392 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187924 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 102004 kB' 'KernelStack: 6784 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 
-- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.841 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.841 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 
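The repeated IFS=': ' / read -r / continue statements traced above are setup/common.sh's get_meminfo helper scanning meminfo key/value pairs for one requested field (HugePages_Surp, then HugePages_Rsvd here) and echoing its value on the first match. A minimal sketch of the same idea, assuming bash and using illustrative names only (this is not the suite's own helper):

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # with a node argument, prefer the per-node view exported by sysfs
        [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
            mem_f=/sys/devices/system/node/node${node}/meminfo
        local var val _
        # per-node meminfo lines carry a "Node N " prefix, so strip it before matching
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Rsvd    -> 0 on this machine
    #      get_meminfo_sketch HugePages_Surp 0  -> node0's surplus count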
00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.842 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.842 12:49:56 -- setup/common.sh@33 -- # echo 0 00:05:15.842 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:15.842 12:49:56 -- setup/hugepages.sh@100 -- # resv=0 00:05:15.842 nr_hugepages=1024 00:05:15.842 12:49:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.842 resv_hugepages=0 00:05:15.842 12:49:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.842 surplus_hugepages=0 00:05:15.842 12:49:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.842 anon_hugepages=0 00:05:15.842 12:49:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.842 12:49:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.842 12:49:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.842 12:49:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.842 12:49:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.842 12:49:56 -- setup/common.sh@18 -- # local node= 00:05:15.842 12:49:56 -- setup/common.sh@19 -- # local var val 00:05:15.842 12:49:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.842 12:49:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.842 12:49:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.842 12:49:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.842 12:49:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.842 12:49:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.842 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.843 12:49:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6740824 kB' 'MemAvailable: 9373780 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497656 kB' 'Inactive: 2457776 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119604 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187924 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 102004 kB' 'KernelStack: 6768 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:15.843 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.843 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.843 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.843 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.843 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # continue 00:05:15.843 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.843 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.843 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # 
continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 
12:49:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.103 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.103 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.104 12:49:56 -- setup/common.sh@33 -- # echo 1024 00:05:16.104 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:16.104 12:49:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.104 12:49:56 -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.104 12:49:56 -- setup/hugepages.sh@27 -- # local 
node 00:05:16.104 12:49:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.104 12:49:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.104 12:49:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.104 12:49:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.104 12:49:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.104 12:49:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.104 12:49:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.104 12:49:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.104 12:49:56 -- setup/common.sh@18 -- # local node=0 00:05:16.104 12:49:56 -- setup/common.sh@19 -- # local var val 00:05:16.104 12:49:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.104 12:49:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.104 12:49:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.104 12:49:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.104 12:49:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.104 12:49:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6740824 kB' 'MemUsed: 5498292 kB' 'SwapCached: 0 kB' 'Active: 497712 kB' 'Inactive: 2457776 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2837432 kB' 'Mapped: 50844 kB' 'AnonPages: 119624 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85920 kB' 'Slab: 187928 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 102008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 
12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.104 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.104 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 
00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # continue 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.105 12:49:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.105 12:49:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.105 12:49:56 -- setup/common.sh@33 -- # echo 0 00:05:16.105 12:49:56 -- setup/common.sh@33 -- # return 0 00:05:16.105 12:49:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.105 12:49:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.105 12:49:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.105 12:49:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.105 node0=1024 expecting 1024 00:05:16.105 12:49:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.105 12:49:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.105 00:05:16.105 real 0m0.549s 00:05:16.105 user 0m0.279s 00:05:16.105 sys 0m0.304s 00:05:16.105 12:49:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.105 12:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:16.105 ************************************ 00:05:16.105 END TEST even_2G_alloc 00:05:16.105 ************************************ 00:05:16.105 12:49:56 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:16.105 12:49:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.105 12:49:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.105 12:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:16.105 ************************************ 00:05:16.105 START TEST odd_alloc 00:05:16.105 ************************************ 00:05:16.105 12:49:56 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:16.105 12:49:56 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:16.105 12:49:56 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:16.105 12:49:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 
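The even_2G_alloc check just above completes with the counters balanced: HugePages_Total (1024) equals the requested nr_hugepages plus surplus plus reserved (1024 + 0 + 0), and node0's own meminfo reports the same 1024 pages ("node0=1024 expecting 1024"). A hedged way to reproduce that accounting outside the suite, using only standard /proc/meminfo fields:

    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    rsvd=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    (( total == 1024 + surp + rsvd )) && echo "hugepage accounting balances"

The odd_alloc run that starts next deliberately targets an odd page count: the 2098176 kB request (HUGEMEM=2049) resolves to nr_hugepages=1025, so the allocator's odd-count path is exercised before the counters are verified again.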
00:05:16.105 12:49:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.105 12:49:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:16.105 12:49:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.105 12:49:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.105 12:49:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.105 12:49:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:16.105 12:49:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.105 12:49:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.105 12:49:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.105 12:49:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.105 12:49:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:16.105 12:49:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.105 12:49:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:16.105 12:49:56 -- setup/hugepages.sh@83 -- # : 0 00:05:16.105 12:49:56 -- setup/hugepages.sh@84 -- # : 0 00:05:16.105 12:49:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.105 12:49:56 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:16.105 12:49:56 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:16.105 12:49:56 -- setup/hugepages.sh@160 -- # setup output 00:05:16.105 12:49:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.105 12:49:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.364 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.364 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.364 12:49:57 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:16.364 12:49:57 -- setup/hugepages.sh@89 -- # local node 00:05:16.364 12:49:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.364 12:49:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.364 12:49:57 -- setup/hugepages.sh@92 -- # local surp 00:05:16.364 12:49:57 -- setup/hugepages.sh@93 -- # local resv 00:05:16.364 12:49:57 -- setup/hugepages.sh@94 -- # local anon 00:05:16.364 12:49:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.364 12:49:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.364 12:49:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.364 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:16.364 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:16.364 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.364 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.364 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.364 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.364 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.364 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6732952 kB' 'MemAvailable: 9365908 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 498080 kB' 'Inactive: 2457776 kB' 'Active(anon): 128912 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119984 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187896 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101976 kB' 'KernelStack: 6760 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 
12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.627 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.627 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:16.627 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:16.627 12:49:57 -- setup/hugepages.sh@97 -- # anon=0 00:05:16.627 12:49:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.627 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.627 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:16.627 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:16.627 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.627 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.627 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.627 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.627 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.627 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.627 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6732704 kB' 'MemAvailable: 9365660 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497668 kB' 'Inactive: 2457776 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119584 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187908 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101988 kB' 'KernelStack: 6768 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 
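The long run of "continue" entries above and below is the meminfo scanner in setup/common.sh stepping over every key in the captured meminfo snapshot until it reaches the one it was asked for (here HugePages_Surp, which it reports as 0 via "echo 0"). A condensed, self-contained sketch of that kind of reader, using the file paths and field names visible in the trace (this is not the verbatim function):

    #!/usr/bin/env bash
    # Condensed sketch of a meminfo field reader in the spirit of get_meminfo:
    # read either /proc/meminfo or a per-node meminfo file, split each line on
    # ': ', skip non-matching keys, and print the value of the requested one.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node N "; strip that so the
        # key comparison works for both file layouts.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0 in the state captured above
    get_meminfo HugePages_Total 0   # per-node variant, used later in this trace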
00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 
-- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.628 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.628 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 
12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.629 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:16.629 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:16.629 12:49:57 -- setup/hugepages.sh@99 -- # surp=0 00:05:16.629 12:49:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.629 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.629 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:16.629 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:16.629 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.629 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.629 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.629 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.629 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.629 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6732704 kB' 'MemAvailable: 9365660 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497724 kB' 'Inactive: 2457776 kB' 'Active(anon): 128556 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119660 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187908 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101988 kB' 'KernelStack: 6784 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
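The surplus scan above came back 0, and the HugePages_Rsvd scan that starts here does too; a few entries further on the script echoes nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then asserts that the configured total adds up. In sketch form, with the values from this run:

    # The global consistency check reached just below, using this run's values:
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) && echo "huge page total matches the requested layout"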
00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 
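Once the global totals line up, the tail of this trace (the get_nodes and node0 section) repeats the accounting per NUMA node: it records the expected 1025 pages for every node directory under /sys/devices/system/node and then re-reads HugePages_Surp from node0's own meminfo file. A minimal sketch of that enumeration, assuming the single-node layout of this VM (the array name and glob mirror the log, the rest is illustrative):

    # Minimal sketch of the per-node bookkeeping done later in this trace.
    shopt -s extglob                      # the node+([0-9]) glob needs extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1025    # expected huge pages on that node
    done
    echo "nodes seen: ${!nodes_sys[*]}"   # "0" on this single-node VM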
00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.629 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.629 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.630 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:16.630 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:16.630 12:49:57 -- setup/hugepages.sh@100 -- # resv=0 00:05:16.630 nr_hugepages=1025 00:05:16.630 12:49:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:16.630 resv_hugepages=0 00:05:16.630 12:49:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.630 surplus_hugepages=0 00:05:16.630 12:49:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.630 anon_hugepages=0 00:05:16.630 12:49:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.630 12:49:57 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + 
resv )) 00:05:16.630 12:49:57 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:16.630 12:49:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.630 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.630 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:16.630 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:16.630 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.630 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.630 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.630 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.630 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.630 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6732704 kB' 'MemAvailable: 9365660 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497772 kB' 'Inactive: 2457776 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187892 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101972 kB' 'KernelStack: 6768 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.630 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.630 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 
12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.631 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.631 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.632 12:49:57 -- setup/common.sh@33 -- # echo 1025 00:05:16.632 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:16.632 12:49:57 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:16.632 12:49:57 -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.632 12:49:57 -- setup/hugepages.sh@27 -- # local node 00:05:16.632 12:49:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.632 12:49:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:16.632 12:49:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.632 12:49:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.632 12:49:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.632 12:49:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.632 12:49:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.632 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.632 12:49:57 -- setup/common.sh@18 -- # local node=0 00:05:16.632 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:16.632 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.632 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.632 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.632 12:49:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.632 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.632 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6739364 kB' 
'MemUsed: 5499752 kB' 'SwapCached: 0 kB' 'Active: 497760 kB' 'Inactive: 2457776 kB' 'Active(anon): 128592 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2837432 kB' 'Mapped: 50844 kB' 'AnonPages: 119684 kB' 'Shmem: 10488 kB' 'KernelStack: 6784 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85920 kB' 'Slab: 187892 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.632 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.632 12:49:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.632 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # continue 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.633 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.633 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.633 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:16.633 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:16.633 12:49:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.633 12:49:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.633 12:49:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.633 node0=1025 expecting 1025 00:05:16.633 12:49:57 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:16.633 12:49:57 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:16.633 00:05:16.633 real 0m0.574s 00:05:16.633 user 0m0.275s 00:05:16.633 sys 0m0.338s 00:05:16.633 12:49:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.633 12:49:57 -- common/autotest_common.sh@10 -- # set +x 00:05:16.633 ************************************ 00:05:16.633 END TEST odd_alloc 00:05:16.633 ************************************ 00:05:16.633 12:49:57 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:16.633 12:49:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.633 12:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.633 12:49:57 -- common/autotest_common.sh@10 -- # set +x 00:05:16.633 ************************************ 00:05:16.633 START TEST custom_alloc 00:05:16.633 ************************************ 00:05:16.633 12:49:57 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:16.633 12:49:57 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:16.633 12:49:57 -- setup/hugepages.sh@169 -- # local node 00:05:16.633 12:49:57 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:16.633 12:49:57 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:16.633 12:49:57 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:16.633 12:49:57 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:16.633 12:49:57 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:16.633 12:49:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:16.633 12:49:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.633 12:49:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.633 12:49:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.633 12:49:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:16.633 12:49:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.633 12:49:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.633 12:49:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.633 12:49:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:16.633 12:49:57 -- setup/hugepages.sh@83 -- # : 0 00:05:16.633 12:49:57 -- setup/hugepages.sh@84 -- # : 0 00:05:16.633 12:49:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
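At this point custom_alloc has turned a requested size of 1048576 kB into nr_hugepages=512 and is spreading that count over the available NUMA nodes (a single node on this VM, so all 512 land on node 0). A rough sketch of the same arithmetic is below; it assumes 2048 kB hugepages and an even split across nodes, and the variable names (want_kb, hp_kb, nodes) are illustrative rather than the script's.

# Sketch only: size-to-pages conversion and an even per-node split.
want_kb=1048576                                               # requested pool size in kB, as in the trace
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)      # 2048 on this VM
nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
pages=$(( want_kb / hp_kb ))                                  # 1048576 / 2048 = 512
per_node=$(( pages / (nodes > 0 ? nodes : 1) ))               # 512, all on node 0 here
echo "pages=$pages per_node=$per_node"
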
00:05:16.633 12:49:57 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:16.633 12:49:57 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:16.633 12:49:57 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:16.633 12:49:57 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:16.633 12:49:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.633 12:49:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.633 12:49:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:16.633 12:49:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.633 12:49:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.633 12:49:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.633 12:49:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:16.633 12:49:57 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:16.633 12:49:57 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:16.633 12:49:57 -- setup/hugepages.sh@78 -- # return 0 00:05:16.633 12:49:57 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:16.633 12:49:57 -- setup/hugepages.sh@187 -- # setup output 00:05:16.633 12:49:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.633 12:49:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.238 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.238 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.238 12:49:57 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:17.238 12:49:57 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:17.238 12:49:57 -- setup/hugepages.sh@89 -- # local node 00:05:17.238 12:49:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.238 12:49:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.238 12:49:57 -- setup/hugepages.sh@92 -- # local surp 00:05:17.238 12:49:57 -- setup/hugepages.sh@93 -- # local resv 00:05:17.238 12:49:57 -- setup/hugepages.sh@94 -- # local anon 00:05:17.238 12:49:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.238 12:49:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.238 12:49:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.238 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:17.238 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:17.238 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.238 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.238 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.238 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.238 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.238 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7788216 kB' 'MemAvailable: 10421172 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497956 kB' 'Inactive: 2457776 kB' 'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 
'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120128 kB' 'Mapped: 50964 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187896 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101976 kB' 'KernelStack: 6820 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.238 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.238 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 
-- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
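The AnonHugePages scan in progress here was gated at hugepages.sh@96 by [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], which inspects the kernel's transparent-hugepage mode string: the bracketed word is the active mode, and anonymous THP usage is only worth reading when that mode is not [never]. A minimal sketch of the same gate, assuming the usual sysfs location for the mode string:

# Sketch only: read AnonHugePages only when THP is not disabled.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "AnonHugePages: ${anon_kb} kB"                        # 0 kB in this run
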
00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.239 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:17.239 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:17.239 12:49:57 -- setup/hugepages.sh@97 -- # anon=0 00:05:17.239 12:49:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.239 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.239 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:17.239 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:17.239 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.239 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.239 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.239 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.239 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.239 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7788532 kB' 'MemAvailable: 10421488 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497828 kB' 'Inactive: 2457776 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119956 kB' 'Mapped: 50964 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187896 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101976 kB' 'KernelStack: 6788 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 
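The get_meminfo call traced here runs with an empty node argument, so [[ -n '' ]] fails and the source stays /proc/meminfo; the earlier per-node call instead read /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips so the keys line up with the system-wide format. A small sketch of that selection and normalization follows; the function name read_meminfo is hypothetical, and extglob is enabled as the trace implies.

# Sketch only: pick the meminfo source and normalize per-node lines.
shopt -s extglob
read_meminfo() {
    local node=$1 file=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    local -a lines
    mapfile -t lines < "$file"
    lines=("${lines[@]#Node +([0-9]) }")     # drop the "Node N " prefix on per-node files
    printf '%s\n' "${lines[@]}"
}
read_meminfo 0 | grep HugePages_Surp         # e.g. "HugePages_Surp: 0" for node 0 above
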
00:05:17.239 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.239 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.239 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 
-- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.240 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.240 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.240 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:17.240 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:17.240 12:49:57 -- setup/hugepages.sh@99 -- # surp=0 00:05:17.240 12:49:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.240 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.240 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:17.240 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:17.240 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.240 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.240 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.240 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.240 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.240 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7788280 kB' 'MemAvailable: 10421236 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497672 kB' 'Inactive: 2457776 kB' 'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187880 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101960 kB' 'KernelStack: 6780 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- 
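Once the anon, surplus, and reserved readbacks complete, the verification reduces to one identity: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages, exactly as the (( 1025 == nr_hugepages + surp + resv )) check did for odd_alloc above; the 512-page custom_alloc pool is checked the same way. A compact sketch of that closing check, with illustrative variable names:

# Sketch only: the closing consistency check of the verification.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
nr_hugepages=512                                   # what custom_alloc requested here
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting OK: $total == $nr_hugepages + $surp + $resv"
else
    echo "hugepage accounting mismatch: $total vs $(( nr_hugepages + surp + resv ))" >&2
fi
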
setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 
-- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.241 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.241 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 
12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.242 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:17.242 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:17.242 12:49:57 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.242 nr_hugepages=512 00:05:17.242 12:49:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:17.242 resv_hugepages=0 00:05:17.242 12:49:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.242 surplus_hugepages=0 00:05:17.242 12:49:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.242 anon_hugepages=0 00:05:17.242 12:49:57 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.242 12:49:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:17.242 12:49:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:17.242 12:49:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.242 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.242 12:49:57 -- setup/common.sh@18 -- # local node= 00:05:17.242 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:17.242 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.242 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.242 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.242 12:49:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.242 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.242 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7788280 kB' 'MemAvailable: 10421236 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497788 kB' 'Inactive: 2457776 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187880 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101960 kB' 'KernelStack: 6812 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # 
[[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.242 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.242 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 
00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
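
A note on the xtrace above: get_meminfo in setup/common.sh dumps the relevant meminfo file and walks it field by field with IFS=': ' until the requested key (HugePages_Total here) is reached, then echoes its value. A minimal standalone sketch of that lookup pattern follows; get_meminfo_value is an illustrative name rather than the actual helper, and the error handling is simplified.

#!/usr/bin/env bash
shopt -s extglob
# Sketch: look up one key in /proc/meminfo, or in a per-node meminfo file when a
# NUMA node is given, mirroring the IFS=': ' / read -r loop traced above.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"                    # numeric value; a trailing "kB" unit lands in $_
            return 0
        fi
    done < "$mem_f"
    return 1
}
get_meminfo_value HugePages_Total          # prints 512 for the state dumped above
get_meminfo_value HugePages_Surp 0         # node-0 variant, as done a little further down
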
00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.243 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.243 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.243 12:49:57 -- setup/common.sh@33 -- # echo 512 00:05:17.243 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:17.243 12:49:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:17.243 12:49:57 -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.243 12:49:57 -- setup/hugepages.sh@27 -- # local node 00:05:17.243 12:49:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.243 12:49:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:17.243 12:49:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.243 12:49:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.243 12:49:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.244 12:49:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.244 12:49:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.244 12:49:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.244 12:49:57 -- setup/common.sh@18 -- # local node=0 00:05:17.244 12:49:57 -- setup/common.sh@19 -- # local var val 00:05:17.244 12:49:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.244 12:49:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.244 12:49:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.244 12:49:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.244 12:49:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.244 12:49:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12239116 kB' 'MemFree: 7788280 kB' 'MemUsed: 4450836 kB' 'SwapCached: 0 kB' 'Active: 497756 kB' 'Inactive: 2457776 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2837432 kB' 'Mapped: 50844 kB' 'AnonPages: 119876 kB' 'Shmem: 10488 kB' 'KernelStack: 6812 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85920 kB' 'Slab: 187880 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 
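
The get_nodes step just before this node-0 dump (setup/hugepages.sh@27-33) enumerates NUMA nodes from sysfs and keys its expectations by node index. A short sketch of that enumeration, assuming extglob is available as it is in the traced script; the array name is illustrative, and the 512-page expectation is the value this run uses for node 0.

#!/usr/bin/env bash
shopt -s extglob nullglob
# Walk /sys/devices/system/node/nodeN and record a per-node hugepage expectation.
declare -A nodes_expected
for node in /sys/devices/system/node/node+([0-9]); do
    idx=${node##*node}            # "/sys/devices/system/node/node0" -> "0"
    nodes_expected[$idx]=512      # expectation checked by "node0=512 expecting 512" below
done
echo "nodes found: ${!nodes_expected[*]} (count: ${#nodes_expected[@]})"
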
00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 
12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.244 12:49:57 -- setup/common.sh@32 -- # continue 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.244 12:49:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.245 12:49:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.245 12:49:57 -- setup/common.sh@33 -- # echo 0 00:05:17.245 12:49:57 -- setup/common.sh@33 -- # return 0 00:05:17.245 12:49:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.245 12:49:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.245 12:49:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.245 12:49:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.245 node0=512 expecting 512 00:05:17.245 12:49:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:17.245 12:49:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:17.245 00:05:17.245 real 0m0.573s 00:05:17.245 user 0m0.277s 00:05:17.245 sys 0m0.332s 00:05:17.245 12:49:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.245 12:49:57 -- common/autotest_common.sh@10 -- # set +x 00:05:17.245 ************************************ 00:05:17.245 END TEST custom_alloc 00:05:17.245 ************************************ 00:05:17.245 12:49:57 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:17.245 12:49:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.245 12:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.245 12:49:57 -- common/autotest_common.sh@10 -- # set +x 00:05:17.245 ************************************ 00:05:17.245 START TEST no_shrink_alloc 00:05:17.245 ************************************ 00:05:17.245 12:49:57 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:17.245 12:49:57 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:17.245 12:49:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:17.245 12:49:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:17.245 12:49:57 -- setup/hugepages.sh@51 -- # shift 00:05:17.245 12:49:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:17.245 12:49:57 -- setup/hugepages.sh@52 -- # local node_ids 00:05:17.245 12:49:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.245 12:49:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:17.245 12:49:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:17.245 12:49:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:17.245 12:49:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.245 12:49:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:17.245 12:49:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.245 12:49:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.245 12:49:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.245 12:49:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:17.245 12:49:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:17.245 12:49:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:17.245 12:49:57 -- setup/hugepages.sh@73 -- # return 0 00:05:17.245 12:49:57 -- setup/hugepages.sh@198 -- # setup output 00:05:17.245 12:49:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.245 12:49:57 -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.816 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.816 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.816 12:49:58 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:17.816 12:49:58 -- setup/hugepages.sh@89 -- # local node 00:05:17.816 12:49:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.816 12:49:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.816 12:49:58 -- setup/hugepages.sh@92 -- # local surp 00:05:17.816 12:49:58 -- setup/hugepages.sh@93 -- # local resv 00:05:17.816 12:49:58 -- setup/hugepages.sh@94 -- # local anon 00:05:17.816 12:49:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.816 12:49:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.816 12:49:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.816 12:49:58 -- setup/common.sh@18 -- # local node= 00:05:17.816 12:49:58 -- setup/common.sh@19 -- # local var val 00:05:17.816 12:49:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.816 12:49:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.816 12:49:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.816 12:49:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.816 12:49:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.816 12:49:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6746528 kB' 'MemAvailable: 9379484 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 498216 kB' 'Inactive: 2457776 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120132 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187916 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101996 kB' 'KernelStack: 6808 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 
12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.816 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.816 12:49:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- 
setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.817 12:49:58 -- setup/common.sh@33 -- # echo 0 00:05:17.817 12:49:58 -- setup/common.sh@33 -- # return 0 00:05:17.817 12:49:58 -- setup/hugepages.sh@97 -- # anon=0 00:05:17.817 12:49:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.817 12:49:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.817 12:49:58 -- setup/common.sh@18 -- # local node= 00:05:17.817 12:49:58 -- setup/common.sh@19 -- # local var val 00:05:17.817 12:49:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.817 12:49:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.817 12:49:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.817 12:49:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.817 12:49:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.817 12:49:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6746528 kB' 'MemAvailable: 9379484 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 498012 kB' 'Inactive: 2457776 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187908 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101988 kB' 'KernelStack: 6744 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.817 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.817 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 
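
One detail worth calling out from the verify_nr_hugepages pass above: the check at setup/hugepages.sh@96 tests the string "always [madvise] never" against *[never]* before looking up AnonHugePages, i.e. anonymous hugepage accounting is skipped when transparent hugepages are pinned to never. The sketch below assumes that string comes from the usual sysfs knob, which the log itself does not print.

#!/usr/bin/env bash
# Guard anon-hugepage accounting on the THP mode, as the traced check does.
# The sysfs path is an assumption; only the tested string appears in the log.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
    awk '/^AnonHugePages:/ {print "AnonHugePages:", $2, "kB"}' /proc/meminfo
else
    echo "THP set to [never]; skipping AnonHugePages"
fi
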
00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- 
setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 
00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.818 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.818 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.818 12:49:58 -- setup/common.sh@33 -- # echo 0 00:05:17.818 12:49:58 -- setup/common.sh@33 -- # return 0 00:05:17.818 12:49:58 -- setup/hugepages.sh@99 -- # surp=0 00:05:17.818 12:49:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.818 12:49:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.819 12:49:58 -- setup/common.sh@18 -- # local node= 00:05:17.819 12:49:58 -- setup/common.sh@19 -- # local var val 00:05:17.819 12:49:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.819 12:49:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.819 12:49:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.819 12:49:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.819 12:49:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.819 12:49:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.819 
12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6746528 kB' 'MemAvailable: 9379484 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497964 kB' 'Inactive: 2457776 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 51104 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187912 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101992 kB' 'KernelStack: 6816 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 
12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.819 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.819 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 
12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.820 12:49:58 -- setup/common.sh@33 -- # echo 0 00:05:17.820 12:49:58 -- setup/common.sh@33 -- # return 0 00:05:17.820 12:49:58 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.820 nr_hugepages=1024 00:05:17.820 12:49:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:17.820 resv_hugepages=0 00:05:17.820 surplus_hugepages=0 00:05:17.820 anon_hugepages=0 00:05:17.820 12:49:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.820 12:49:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.820 12:49:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.820 12:49:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.820 12:49:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:17.820 12:49:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.820 12:49:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.820 12:49:58 -- setup/common.sh@18 -- # local node= 00:05:17.820 12:49:58 -- setup/common.sh@19 -- # local var val 00:05:17.820 12:49:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.820 12:49:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.820 12:49:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.820 12:49:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.820 12:49:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.820 12:49:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6746780 kB' 'MemAvailable: 9379736 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 497608 kB' 'Inactive: 2457776 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119596 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85920 kB' 'Slab: 187884 kB' 
'SReclaimable: 85920 kB' 'SUnreclaim: 101964 kB' 'KernelStack: 6768 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.820 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.820 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 
12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- 
setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.821 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.821 12:49:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.821 12:49:58 -- 
setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.822 12:49:58 -- setup/common.sh@33 -- # echo 1024 00:05:17.822 12:49:58 -- setup/common.sh@33 -- # return 0 00:05:17.822 12:49:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.822 12:49:58 -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.822 12:49:58 -- setup/hugepages.sh@27 -- # local node 00:05:17.822 12:49:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.822 12:49:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:17.822 12:49:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.822 12:49:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.822 12:49:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.822 12:49:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.822 12:49:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.822 12:49:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.822 12:49:58 -- setup/common.sh@18 -- # local node=0 00:05:17.822 12:49:58 -- setup/common.sh@19 -- # local var val 00:05:17.822 12:49:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.822 12:49:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.822 12:49:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.822 12:49:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.822 12:49:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.822 12:49:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6746280 kB' 'MemUsed: 5492836 kB' 'SwapCached: 0 kB' 'Active: 497696 kB' 'Inactive: 2457776 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2837432 kB' 'Mapped: 50844 kB' 'AnonPages: 119688 kB' 'Shmem: 10488 kB' 'KernelStack: 6784 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85920 kB' 'Slab: 187876 kB' 'SReclaimable: 85920 kB' 'SUnreclaim: 101956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 
00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.822 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.822 12:49:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # continue 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.823 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.823 12:49:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.823 12:49:58 -- setup/common.sh@33 -- # echo 0 00:05:17.823 12:49:58 -- setup/common.sh@33 -- # return 0 00:05:17.823 12:49:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.823 12:49:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.823 12:49:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.823 12:49:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.823 node0=1024 expecting 1024 00:05:17.823 12:49:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:17.823 12:49:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:17.823 12:49:58 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:17.823 12:49:58 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:17.823 12:49:58 -- setup/hugepages.sh@202 -- # setup output 00:05:17.823 12:49:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.823 12:49:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
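The trace above is setup/common.sh's get_meminfo helper scanning meminfo key by key. A minimal sketch of that pattern follows (not the SPDK script itself; the function and variable names here are illustrative): read /proc/meminfo, or the per-node file when a node id is given, strip the "Node <N> " prefix those per-node files add, then match "key: value" pairs until the requested key is found and print its value.

shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # node-scoped lookups read the per-node meminfo file instead, when it exists
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g.  get_meminfo_sketch HugePages_Surp     -> system-wide surplus hugepages
#       get_meminfo_sketch HugePages_Free 0   -> free hugepages on NUMA node 0

With the values printed in the run above (HugePages_Total 1024, HugePages_Surp 0, HugePages_Rsvd 0, AnonHugePages 0), the verify_nr_hugepages accounting reduces to 1024 == 1024 + 0 + 0, and with a single NUMA node all pages are attributed to node0, hence "node0=1024 expecting 1024".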
00:05:18.394 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.394 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.394 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:18.394 12:49:58 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:18.394 12:49:58 -- setup/hugepages.sh@89 -- # local node 00:05:18.394 12:49:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.394 12:49:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.394 12:49:58 -- setup/hugepages.sh@92 -- # local surp 00:05:18.394 12:49:58 -- setup/hugepages.sh@93 -- # local resv 00:05:18.394 12:49:58 -- setup/hugepages.sh@94 -- # local anon 00:05:18.394 12:49:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:18.394 12:49:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.394 12:49:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:18.394 12:49:58 -- setup/common.sh@18 -- # local node= 00:05:18.394 12:49:58 -- setup/common.sh@19 -- # local var val 00:05:18.394 12:49:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.394 12:49:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.394 12:49:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.394 12:49:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.394 12:49:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.394 12:49:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6748896 kB' 'MemAvailable: 9381848 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 495364 kB' 'Inactive: 2457776 kB' 'Active(anon): 126196 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117292 kB' 'Mapped: 50112 kB' 'Shmem: 10488 kB' 'KReclaimable: 85912 kB' 'Slab: 187624 kB' 'SReclaimable: 85912 kB' 'SUnreclaim: 101712 kB' 'KernelStack: 6728 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # 
[[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.394 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.394 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 
-- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': 
' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.395 12:49:58 -- 
setup/common.sh@33 -- # echo 0 00:05:18.395 12:49:58 -- setup/common.sh@33 -- # return 0 00:05:18.395 12:49:58 -- setup/hugepages.sh@97 -- # anon=0 00:05:18.395 12:49:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.395 12:49:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.395 12:49:58 -- setup/common.sh@18 -- # local node= 00:05:18.395 12:49:58 -- setup/common.sh@19 -- # local var val 00:05:18.395 12:49:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.395 12:49:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.395 12:49:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.395 12:49:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.395 12:49:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.395 12:49:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6748896 kB' 'MemAvailable: 9381848 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 495068 kB' 'Inactive: 2457776 kB' 'Active(anon): 125900 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116948 kB' 'Mapped: 50056 kB' 'Shmem: 10488 kB' 'KReclaimable: 85912 kB' 'Slab: 187628 kB' 'SReclaimable: 85912 kB' 'SUnreclaim: 101716 kB' 'KernelStack: 6672 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.395 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.395 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- 
setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:58 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 
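
The loop traced here is a field-by-field scan of /proc/meminfo: setup/common.sh reads each "key: value" pair, skips every key that does not match the one requested (AnonHugePages, then HugePages_Surp, and so on) with `continue`, and echoes the matching value. A minimal, illustrative bash sketch of that lookup follows; the helper name get_meminfo_sketch and the sed-based prefix stripping are stand-ins for what the trace shows, not the repository's actual setup/common.sh implementation.

# Illustrative only: mirrors the mechanics visible in the trace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the sysfs copy instead, as the trace does for node0.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix every line with "Node <n> "; strip it so both
    # formats parse identically.
    sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields, as above
        echo "${val:-0}"
        break
    done
}

# On the node in this log, both of these calls print 0.
get_meminfo_sketch HugePages_Surp
get_meminfo_sketch HugePages_Surp 0
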
00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.396 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.396 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.397 12:49:59 -- setup/common.sh@33 -- # echo 0 00:05:18.397 12:49:59 -- setup/common.sh@33 -- # return 0 00:05:18.397 12:49:59 -- setup/hugepages.sh@99 -- # surp=0 00:05:18.397 12:49:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.397 12:49:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.397 12:49:59 -- setup/common.sh@18 -- # local node= 00:05:18.397 12:49:59 -- setup/common.sh@19 -- # local var val 00:05:18.397 12:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.397 12:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.397 12:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.397 12:49:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.397 12:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.397 12:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6748896 kB' 'MemAvailable: 9381848 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 495188 kB' 'Inactive: 2457776 kB' 'Active(anon): 126020 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117100 kB' 'Mapped: 49996 kB' 'Shmem: 10488 kB' 'KReclaimable: 85912 kB' 'Slab: 187624 kB' 'SReclaimable: 85912 kB' 'SUnreclaim: 101712 kB' 'KernelStack: 6656 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # 
continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.397 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.397 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.398 12:49:59 -- setup/common.sh@33 -- # echo 0 00:05:18.398 12:49:59 -- setup/common.sh@33 -- # return 0 00:05:18.398 nr_hugepages=1024 00:05:18.398 resv_hugepages=0 00:05:18.398 surplus_hugepages=0 00:05:18.398 anon_hugepages=0 00:05:18.398 12:49:59 -- setup/hugepages.sh@100 -- # resv=0 00:05:18.398 12:49:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.398 12:49:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.398 12:49:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.398 12:49:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.398 12:49:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.398 12:49:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.398 12:49:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.398 12:49:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.398 12:49:59 -- setup/common.sh@18 -- # local node= 00:05:18.398 12:49:59 -- setup/common.sh@19 -- # local var val 00:05:18.398 12:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.398 12:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.398 12:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.398 12:49:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.398 12:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.398 12:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6749652 kB' 'MemAvailable: 9382604 kB' 'Buffers: 2684 kB' 'Cached: 2834748 kB' 'SwapCached: 0 kB' 'Active: 494964 kB' 'Inactive: 2457776 kB' 'Active(anon): 125796 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116900 kB' 'Mapped: 49996 kB' 'Shmem: 10488 kB' 'KReclaimable: 85912 kB' 'Slab: 187608 kB' 'SReclaimable: 85912 kB' 
'SUnreclaim: 101696 kB' 'KernelStack: 6592 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.398 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.398 12:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.399 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.399 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.400 12:49:59 -- setup/common.sh@33 -- # echo 1024 00:05:18.400 12:49:59 -- setup/common.sh@33 -- # return 0 00:05:18.400 12:49:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.400 12:49:59 -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.400 12:49:59 -- setup/hugepages.sh@27 -- # local node 00:05:18.400 12:49:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.400 12:49:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.400 12:49:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.400 12:49:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.400 12:49:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.400 12:49:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.400 12:49:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.400 12:49:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.400 12:49:59 -- setup/common.sh@18 -- # local node=0 00:05:18.400 12:49:59 -- setup/common.sh@19 -- # local var val 00:05:18.400 12:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.400 12:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.400 12:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.400 12:49:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.400 12:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.400 12:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6749652 kB' 'MemUsed: 5489464 kB' 'SwapCached: 0 kB' 'Active: 494940 kB' 'Inactive: 2457776 kB' 'Active(anon): 125772 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2457776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2837432 kB' 'Mapped: 49996 kB' 'AnonPages: 116856 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85912 kB' 'Slab: 187612 kB' 'SReclaimable: 85912 kB' 'SUnreclaim: 101700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # 
continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.400 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.400 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # continue 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.401 12:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.401 12:49:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.401 12:49:59 -- setup/common.sh@33 -- # echo 0 00:05:18.401 12:49:59 -- setup/common.sh@33 -- # return 0 00:05:18.401 12:49:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.401 12:49:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.401 12:49:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.401 12:49:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.401 12:49:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.401 node0=1024 expecting 1024 00:05:18.401 12:49:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.401 00:05:18.401 real 0m1.158s 00:05:18.401 user 0m0.566s 00:05:18.401 sys 0m0.635s 00:05:18.401 ************************************ 00:05:18.401 END TEST no_shrink_alloc 00:05:18.401 ************************************ 00:05:18.401 12:49:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.401 12:49:59 -- common/autotest_common.sh@10 -- # set +x 00:05:18.660 12:49:59 -- setup/hugepages.sh@217 -- # clear_hp 00:05:18.660 12:49:59 -- setup/hugepages.sh@37 -- # local node hp 00:05:18.660 12:49:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
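The field-by-field scan traced above is the get_meminfo helper in setup/common.sh walking /sys/devices/system/node/node0/meminfo until it reaches the requested key (HugePages_Surp here) and echoing its value. A minimal standalone sketch of that lookup, assuming only the node0 path and key that appear in the trace:

    # Sketch of the get_meminfo lookup traced above (assumes the node0 path
    # and the HugePages_Surp key shown in the log).
    get=HugePages_Surp
    mem_f=/sys/devices/system/node/node0/meminfo
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the leading "Node 0 " prefix
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"                       # e.g. 0 for HugePages_Surp in this run
            break
        fi
    done < <(printf '%s\n' "${mem[@]}")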
00:05:18.660 12:49:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.660 12:49:59 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.660 12:49:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.660 12:49:59 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.660 12:49:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:18.660 12:49:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:18.660 ************************************ 00:05:18.660 END TEST hugepages 00:05:18.660 ************************************ 00:05:18.660 00:05:18.660 real 0m4.987s 00:05:18.660 user 0m2.410s 00:05:18.660 sys 0m2.656s 00:05:18.660 12:49:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.660 12:49:59 -- common/autotest_common.sh@10 -- # set +x 00:05:18.660 12:49:59 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:18.660 12:49:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.660 12:49:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.660 12:49:59 -- common/autotest_common.sh@10 -- # set +x 00:05:18.660 ************************************ 00:05:18.660 START TEST driver 00:05:18.660 ************************************ 00:05:18.660 12:49:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:18.660 * Looking for test storage... 00:05:18.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.660 12:49:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.660 12:49:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.660 12:49:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.660 12:49:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.660 12:49:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.660 12:49:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.660 12:49:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.660 12:49:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.660 12:49:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.660 12:49:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.660 12:49:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.660 12:49:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.660 12:49:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.660 12:49:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.660 12:49:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.660 12:49:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.660 12:49:59 -- scripts/common.sh@344 -- # : 1 00:05:18.660 12:49:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.660 12:49:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.660 12:49:59 -- scripts/common.sh@364 -- # decimal 1 00:05:18.660 12:49:59 -- scripts/common.sh@352 -- # local d=1 00:05:18.660 12:49:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.660 12:49:59 -- scripts/common.sh@354 -- # echo 1 00:05:18.660 12:49:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.660 12:49:59 -- scripts/common.sh@365 -- # decimal 2 00:05:18.660 12:49:59 -- scripts/common.sh@352 -- # local d=2 00:05:18.660 12:49:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.660 12:49:59 -- scripts/common.sh@354 -- # echo 2 00:05:18.660 12:49:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.660 12:49:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.660 12:49:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.660 12:49:59 -- scripts/common.sh@367 -- # return 0 00:05:18.660 12:49:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.660 12:49:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.660 --rc genhtml_branch_coverage=1 00:05:18.660 --rc genhtml_function_coverage=1 00:05:18.660 --rc genhtml_legend=1 00:05:18.660 --rc geninfo_all_blocks=1 00:05:18.660 --rc geninfo_unexecuted_blocks=1 00:05:18.660 00:05:18.660 ' 00:05:18.660 12:49:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.661 --rc genhtml_branch_coverage=1 00:05:18.661 --rc genhtml_function_coverage=1 00:05:18.661 --rc genhtml_legend=1 00:05:18.661 --rc geninfo_all_blocks=1 00:05:18.661 --rc geninfo_unexecuted_blocks=1 00:05:18.661 00:05:18.661 ' 00:05:18.661 12:49:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.661 --rc genhtml_branch_coverage=1 00:05:18.661 --rc genhtml_function_coverage=1 00:05:18.661 --rc genhtml_legend=1 00:05:18.661 --rc geninfo_all_blocks=1 00:05:18.661 --rc geninfo_unexecuted_blocks=1 00:05:18.661 00:05:18.661 ' 00:05:18.661 12:49:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.661 --rc genhtml_branch_coverage=1 00:05:18.661 --rc genhtml_function_coverage=1 00:05:18.661 --rc genhtml_legend=1 00:05:18.661 --rc geninfo_all_blocks=1 00:05:18.661 --rc geninfo_unexecuted_blocks=1 00:05:18.661 00:05:18.661 ' 00:05:18.661 12:49:59 -- setup/driver.sh@68 -- # setup reset 00:05:18.661 12:49:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.661 12:49:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.228 12:49:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:19.228 12:49:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.228 12:49:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.228 12:49:59 -- common/autotest_common.sh@10 -- # set +x 00:05:19.228 ************************************ 00:05:19.228 START TEST guess_driver 00:05:19.228 ************************************ 00:05:19.228 12:49:59 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:19.228 12:49:59 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:19.228 12:49:59 -- setup/driver.sh@47 -- # local fail=0 00:05:19.228 12:49:59 -- setup/driver.sh@49 -- # pick_driver 00:05:19.228 12:49:59 -- setup/driver.sh@36 -- # vfio 
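pick_driver, entered just above and traced below, first probes for vfio support (IOMMU groups present, or unsafe no-IOMMU mode enabled) and otherwise falls back to uio_pci_generic when modprobe can resolve that module. A hedged reconstruction of that decision using only the checks visible in the trace; the "No valid driver found" string comes from the log, while the vfio-pci name in the first branch is an assumption:

    # Hedged sketch of the driver pick traced around this point.
    pick_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci                     # assumption: vfio branch reports vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }
    pick_driver    # prints uio_pci_generic on this VM (no IOMMU groups, per the trace)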
00:05:19.228 12:49:59 -- setup/driver.sh@21 -- # local iommu_grups 00:05:19.228 12:49:59 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:19.228 12:50:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:19.228 12:50:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:19.228 12:50:00 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:19.228 12:50:00 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:19.228 12:50:00 -- setup/driver.sh@32 -- # return 1 00:05:19.228 12:50:00 -- setup/driver.sh@38 -- # uio 00:05:19.228 12:50:00 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:19.228 12:50:00 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:19.228 12:50:00 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:19.228 12:50:00 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:19.486 12:50:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:19.486 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:19.486 12:50:00 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:19.486 Looking for driver=uio_pci_generic 00:05:19.486 12:50:00 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:19.486 12:50:00 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:19.486 12:50:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:19.486 12:50:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.486 12:50:00 -- setup/driver.sh@45 -- # setup output config 00:05:19.486 12:50:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.486 12:50:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.053 12:50:00 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:20.053 12:50:00 -- setup/driver.sh@58 -- # continue 00:05:20.053 12:50:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.053 12:50:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.053 12:50:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:20.053 12:50:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.312 12:50:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.312 12:50:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:20.312 12:50:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.312 12:50:00 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:20.312 12:50:00 -- setup/driver.sh@65 -- # setup reset 00:05:20.312 12:50:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.312 12:50:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.878 00:05:20.878 real 0m1.461s 00:05:20.878 user 0m0.585s 00:05:20.878 sys 0m0.872s 00:05:20.878 12:50:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.878 ************************************ 00:05:20.878 12:50:01 -- common/autotest_common.sh@10 -- # set +x 00:05:20.878 END TEST guess_driver 00:05:20.878 ************************************ 00:05:20.878 ************************************ 00:05:20.878 END TEST driver 00:05:20.878 ************************************ 00:05:20.878 00:05:20.878 real 0m2.262s 00:05:20.878 user 0m0.906s 00:05:20.878 sys 0m1.414s 00:05:20.878 12:50:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.878 12:50:01 -- common/autotest_common.sh@10 -- # 
set +x 00:05:20.878 12:50:01 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:20.878 12:50:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.878 12:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.878 12:50:01 -- common/autotest_common.sh@10 -- # set +x 00:05:20.878 ************************************ 00:05:20.878 START TEST devices 00:05:20.878 ************************************ 00:05:20.878 12:50:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:20.878 * Looking for test storage... 00:05:20.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.878 12:50:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.878 12:50:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.878 12:50:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.137 12:50:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.137 12:50:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.137 12:50:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.137 12:50:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.137 12:50:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.137 12:50:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.137 12:50:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.137 12:50:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.137 12:50:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.137 12:50:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.137 12:50:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.137 12:50:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.137 12:50:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.137 12:50:01 -- scripts/common.sh@344 -- # : 1 00:05:21.137 12:50:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.137 12:50:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.137 12:50:01 -- scripts/common.sh@364 -- # decimal 1 00:05:21.137 12:50:01 -- scripts/common.sh@352 -- # local d=1 00:05:21.137 12:50:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.137 12:50:01 -- scripts/common.sh@354 -- # echo 1 00:05:21.137 12:50:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.137 12:50:01 -- scripts/common.sh@365 -- # decimal 2 00:05:21.137 12:50:01 -- scripts/common.sh@352 -- # local d=2 00:05:21.137 12:50:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.137 12:50:01 -- scripts/common.sh@354 -- # echo 2 00:05:21.137 12:50:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.137 12:50:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.137 12:50:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.137 12:50:01 -- scripts/common.sh@367 -- # return 0 00:05:21.137 12:50:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.137 12:50:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.137 --rc genhtml_branch_coverage=1 00:05:21.137 --rc genhtml_function_coverage=1 00:05:21.137 --rc genhtml_legend=1 00:05:21.137 --rc geninfo_all_blocks=1 00:05:21.137 --rc geninfo_unexecuted_blocks=1 00:05:21.137 00:05:21.137 ' 00:05:21.137 12:50:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.137 --rc genhtml_branch_coverage=1 00:05:21.137 --rc genhtml_function_coverage=1 00:05:21.137 --rc genhtml_legend=1 00:05:21.137 --rc geninfo_all_blocks=1 00:05:21.137 --rc geninfo_unexecuted_blocks=1 00:05:21.137 00:05:21.137 ' 00:05:21.137 12:50:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.137 --rc genhtml_branch_coverage=1 00:05:21.137 --rc genhtml_function_coverage=1 00:05:21.137 --rc genhtml_legend=1 00:05:21.137 --rc geninfo_all_blocks=1 00:05:21.137 --rc geninfo_unexecuted_blocks=1 00:05:21.137 00:05:21.137 ' 00:05:21.137 12:50:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.137 --rc genhtml_branch_coverage=1 00:05:21.137 --rc genhtml_function_coverage=1 00:05:21.137 --rc genhtml_legend=1 00:05:21.137 --rc geninfo_all_blocks=1 00:05:21.137 --rc geninfo_unexecuted_blocks=1 00:05:21.137 00:05:21.137 ' 00:05:21.137 12:50:01 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:21.137 12:50:01 -- setup/devices.sh@192 -- # setup reset 00:05:21.137 12:50:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.137 12:50:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.705 12:50:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:21.705 12:50:02 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:21.705 12:50:02 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:21.705 12:50:02 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:21.705 12:50:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:21.705 12:50:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:21.705 12:50:02 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:21.705 12:50:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.705 12:50:02 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:21.705 12:50:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:21.705 12:50:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:21.705 12:50:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:21.705 12:50:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:21.705 12:50:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:21.705 12:50:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:21.705 12:50:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:21.705 12:50:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:21.705 12:50:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:21.705 12:50:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:21.705 12:50:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:21.705 12:50:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:21.705 12:50:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:21.705 12:50:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:21.705 12:50:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:21.705 12:50:02 -- setup/devices.sh@196 -- # blocks=() 00:05:21.963 12:50:02 -- setup/devices.sh@196 -- # declare -a blocks 00:05:21.964 12:50:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:21.964 12:50:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:21.964 12:50:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:21.964 12:50:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:21.964 12:50:02 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:21.964 12:50:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:21.964 12:50:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:21.964 12:50:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:21.964 No valid GPT data, bailing 00:05:21.964 12:50:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:21.964 12:50:02 -- scripts/common.sh@393 -- # pt= 00:05:21.964 12:50:02 -- scripts/common.sh@394 -- # return 1 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:21.964 12:50:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:21.964 12:50:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:21.964 12:50:02 -- setup/common.sh@80 -- # echo 5368709120 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:21.964 12:50:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.964 12:50:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:21.964 12:50:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:21.964 12:50:02 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:21.964 12:50:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
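Earlier in this test, get_zoned_devs (traced above) reads each namespace's queue/zoned attribute and would collect anything other than "none" as a zoned device to exclude; in this run every namespace reports none. A small sketch of that screening, with the sysfs attribute path taken from the trace:

    # Sketch of the zoned-namespace screening traced above.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        zoned=$(<"$nvme/queue/zoned")
        [[ $zoned != none ]] && zoned_devs[${nvme##*/}]=$zoned
    done
    (( ${#zoned_devs[@]} == 0 )) && echo "no zoned namespaces found"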
00:05:21.964 12:50:02 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:21.964 12:50:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:21.964 No valid GPT data, bailing 00:05:21.964 12:50:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:21.964 12:50:02 -- scripts/common.sh@393 -- # pt= 00:05:21.964 12:50:02 -- scripts/common.sh@394 -- # return 1 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:21.964 12:50:02 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:21.964 12:50:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:21.964 12:50:02 -- setup/common.sh@80 -- # echo 4294967296 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.964 12:50:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.964 12:50:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:21.964 12:50:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:21.964 12:50:02 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:21.964 12:50:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:21.964 12:50:02 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:21.964 12:50:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:21.964 No valid GPT data, bailing 00:05:21.964 12:50:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:21.964 12:50:02 -- scripts/common.sh@393 -- # pt= 00:05:21.964 12:50:02 -- scripts/common.sh@394 -- # return 1 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:21.964 12:50:02 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:21.964 12:50:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:21.964 12:50:02 -- setup/common.sh@80 -- # echo 4294967296 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.964 12:50:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.964 12:50:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:21.964 12:50:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:21.964 12:50:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:21.964 12:50:02 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:21.964 12:50:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:21.964 12:50:02 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:21.964 12:50:02 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:21.964 12:50:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:22.223 No valid GPT data, bailing 00:05:22.223 12:50:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:22.223 12:50:02 -- scripts/common.sh@393 -- # pt= 00:05:22.223 12:50:02 -- scripts/common.sh@394 -- # return 1 00:05:22.223 12:50:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:22.223 12:50:02 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:22.223 12:50:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:22.223 12:50:02 -- setup/common.sh@80 -- # echo 4294967296 
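The per-disk checks traced above (block_in_use plus the size comparison) keep a namespace only when blkid finds no partition table on it and its capacity clears min_disk_size (3221225472 bytes). A rough sketch of that filter; min_disk_size and the blkid call come from the trace, while the 512-byte-sector conversion is an assumption, since the log only prints the final byte counts:

    # Rough sketch of the test-disk filter traced above.
    min_disk_size=3221225472
    for block in /sys/block/nvme*n*; do
        dev=${block##*/}
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
        [[ -n $pt ]] && continue                      # partition table present: disk in use
        bytes=$(( $(cat "$block/size") * 512 ))       # assumption: 512-byte sectors
        (( bytes >= min_disk_size )) && echo "usable test disk: $dev ($bytes bytes)"
    done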
00:05:22.223 12:50:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:22.223 12:50:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.223 12:50:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:22.223 12:50:02 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:22.223 12:50:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:22.223 12:50:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:22.223 12:50:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.223 12:50:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.223 12:50:02 -- common/autotest_common.sh@10 -- # set +x 00:05:22.223 ************************************ 00:05:22.223 START TEST nvme_mount 00:05:22.223 ************************************ 00:05:22.223 12:50:02 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:22.223 12:50:02 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:22.223 12:50:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:22.223 12:50:02 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.223 12:50:02 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.223 12:50:02 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:22.223 12:50:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:22.223 12:50:02 -- setup/common.sh@40 -- # local part_no=1 00:05:22.223 12:50:02 -- setup/common.sh@41 -- # local size=1073741824 00:05:22.223 12:50:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:22.223 12:50:02 -- setup/common.sh@44 -- # parts=() 00:05:22.223 12:50:02 -- setup/common.sh@44 -- # local parts 00:05:22.223 12:50:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:22.223 12:50:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.223 12:50:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.223 12:50:02 -- setup/common.sh@46 -- # (( part++ )) 00:05:22.223 12:50:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.223 12:50:02 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:22.223 12:50:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:22.223 12:50:02 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:23.159 Creating new GPT entries in memory. 00:05:23.159 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:23.159 other utilities. 00:05:23.159 12:50:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:23.159 12:50:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.159 12:50:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:23.159 12:50:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:23.159 12:50:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:24.533 Creating new GPT entries in memory. 00:05:24.533 The operation has completed successfully. 
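With partition 1 created above, the nvme_mount test goes on (traced below) to format it and mount it under the test directory. A condensed sketch of that prepare step; the device, partition bounds, and mount point are copied from the trace:

    # Condensed sketch of the nvme_mount prepare step traced around this point.
    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                 # clear any existing GPT/MBR signatures
    sgdisk "$disk" --new=1:2048:264191       # partition 1, sectors 2048-264191
    mkfs.ext4 -qF "${disk}p1"                # quiet, forced ext4
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"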
00:05:24.533 12:50:04 -- setup/common.sh@57 -- # (( part++ )) 00:05:24.533 12:50:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.533 12:50:04 -- setup/common.sh@62 -- # wait 65535 00:05:24.533 12:50:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.533 12:50:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:24.533 12:50:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.533 12:50:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:24.533 12:50:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:24.533 12:50:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.533 12:50:04 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.533 12:50:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:24.533 12:50:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:24.533 12:50:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.533 12:50:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.533 12:50:04 -- setup/devices.sh@53 -- # local found=0 00:05:24.533 12:50:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.533 12:50:04 -- setup/devices.sh@56 -- # : 00:05:24.533 12:50:04 -- setup/devices.sh@59 -- # local pci status 00:05:24.533 12:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.533 12:50:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:24.533 12:50:04 -- setup/devices.sh@47 -- # setup output config 00:05:24.533 12:50:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.533 12:50:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.533 12:50:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.533 12:50:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:24.533 12:50:05 -- setup/devices.sh@63 -- # found=1 00:05:24.533 12:50:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.533 12:50:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.533 12:50:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.790 12:50:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.790 12:50:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.790 12:50:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.790 12:50:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.048 12:50:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.048 12:50:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:25.048 12:50:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.048 12:50:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.048 12:50:05 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.048 12:50:05 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:25.048 12:50:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.048 12:50:05 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.048 12:50:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.048 12:50:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.048 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.048 12:50:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.048 12:50:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.307 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:25.307 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:25.307 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:25.307 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:25.307 12:50:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:25.307 12:50:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:25.307 12:50:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.307 12:50:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:25.307 12:50:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:25.307 12:50:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.307 12:50:05 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.307 12:50:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:25.307 12:50:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:25.307 12:50:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.307 12:50:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.307 12:50:05 -- setup/devices.sh@53 -- # local found=0 00:05:25.307 12:50:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.307 12:50:05 -- setup/devices.sh@56 -- # : 00:05:25.307 12:50:05 -- setup/devices.sh@59 -- # local pci status 00:05:25.307 12:50:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.307 12:50:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:25.307 12:50:05 -- setup/devices.sh@47 -- # setup output config 00:05:25.307 12:50:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.307 12:50:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.566 12:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.566 12:50:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:25.566 12:50:06 -- setup/devices.sh@63 -- # found=1 00:05:25.566 12:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.566 12:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.566 
12:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.825 12:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.825 12:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.825 12:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.825 12:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.825 12:50:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.825 12:50:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:25.825 12:50:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.825 12:50:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.825 12:50:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.825 12:50:06 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.825 12:50:06 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:25.825 12:50:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:25.825 12:50:06 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:25.825 12:50:06 -- setup/devices.sh@50 -- # local mount_point= 00:05:25.825 12:50:06 -- setup/devices.sh@51 -- # local test_file= 00:05:25.825 12:50:06 -- setup/devices.sh@53 -- # local found=0 00:05:25.825 12:50:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.825 12:50:06 -- setup/devices.sh@59 -- # local pci status 00:05:25.825 12:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.825 12:50:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:25.825 12:50:06 -- setup/devices.sh@47 -- # setup output config 00:05:25.825 12:50:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.825 12:50:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.083 12:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.083 12:50:06 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:26.083 12:50:06 -- setup/devices.sh@63 -- # found=1 00:05:26.083 12:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.083 12:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.083 12:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.650 12:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.650 12:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.650 12:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.650 12:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.650 12:50:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.650 12:50:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.650 12:50:07 -- setup/devices.sh@68 -- # return 0 00:05:26.650 12:50:07 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:26.650 12:50:07 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.650 12:50:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.650 12:50:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.650 12:50:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.650 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:26.650 00:05:26.650 real 0m4.495s 00:05:26.650 user 0m1.023s 00:05:26.650 sys 0m1.158s 00:05:26.650 12:50:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.650 12:50:07 -- common/autotest_common.sh@10 -- # set +x 00:05:26.650 ************************************ 00:05:26.650 END TEST nvme_mount 00:05:26.650 ************************************ 00:05:26.650 12:50:07 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:26.650 12:50:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.650 12:50:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.650 12:50:07 -- common/autotest_common.sh@10 -- # set +x 00:05:26.650 ************************************ 00:05:26.650 START TEST dm_mount 00:05:26.650 ************************************ 00:05:26.650 12:50:07 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:26.650 12:50:07 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:26.650 12:50:07 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:26.650 12:50:07 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:26.650 12:50:07 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:26.650 12:50:07 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:26.650 12:50:07 -- setup/common.sh@40 -- # local part_no=2 00:05:26.650 12:50:07 -- setup/common.sh@41 -- # local size=1073741824 00:05:26.650 12:50:07 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:26.650 12:50:07 -- setup/common.sh@44 -- # parts=() 00:05:26.650 12:50:07 -- setup/common.sh@44 -- # local parts 00:05:26.650 12:50:07 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:26.650 12:50:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.650 12:50:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.650 12:50:07 -- setup/common.sh@46 -- # (( part++ )) 00:05:26.650 12:50:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.651 12:50:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.651 12:50:07 -- setup/common.sh@46 -- # (( part++ )) 00:05:26.651 12:50:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.651 12:50:07 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:26.651 12:50:07 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:26.651 12:50:07 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:28.057 Creating new GPT entries in memory. 00:05:28.057 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.057 other utilities. 00:05:28.057 12:50:08 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.057 12:50:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.057 12:50:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.057 12:50:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.057 12:50:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:28.993 Creating new GPT entries in memory. 00:05:28.993 The operation has completed successfully. 00:05:28.993 12:50:09 -- setup/common.sh@57 -- # (( part++ )) 00:05:28.993 12:50:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.993 12:50:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:28.993 12:50:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.993 12:50:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:29.928 The operation has completed successfully. 00:05:29.928 12:50:10 -- setup/common.sh@57 -- # (( part++ )) 00:05:29.928 12:50:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.928 12:50:10 -- setup/common.sh@62 -- # wait 65995 00:05:29.928 12:50:10 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:29.928 12:50:10 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.928 12:50:10 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.928 12:50:10 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:29.928 12:50:10 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:29.928 12:50:10 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.928 12:50:10 -- setup/devices.sh@161 -- # break 00:05:29.928 12:50:10 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.928 12:50:10 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:29.928 12:50:10 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:29.928 12:50:10 -- setup/devices.sh@166 -- # dm=dm-0 00:05:29.928 12:50:10 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:29.928 12:50:10 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:29.928 12:50:10 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.928 12:50:10 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:29.928 12:50:10 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.928 12:50:10 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.928 12:50:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:29.928 12:50:10 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.928 12:50:10 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.928 12:50:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:29.928 12:50:10 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:29.928 12:50:10 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.928 12:50:10 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.928 12:50:10 -- setup/devices.sh@53 -- # local found=0 00:05:29.928 12:50:10 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.928 12:50:10 -- setup/devices.sh@56 -- # : 00:05:29.928 12:50:10 -- setup/devices.sh@59 -- # local pci status 00:05:29.928 12:50:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:29.928 12:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.928 12:50:10 -- setup/devices.sh@47 -- # setup output config 00:05:29.928 12:50:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.928 12:50:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.187 12:50:10 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.187 12:50:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:30.187 12:50:10 -- setup/devices.sh@63 -- # found=1 00:05:30.187 12:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.187 12:50:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.187 12:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.446 12:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.446 12:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.446 12:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.446 12:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.446 12:50:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.446 12:50:11 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:30.446 12:50:11 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.446 12:50:11 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.446 12:50:11 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.446 12:50:11 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.446 12:50:11 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:30.446 12:50:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:30.446 12:50:11 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:30.446 12:50:11 -- setup/devices.sh@50 -- # local mount_point= 00:05:30.446 12:50:11 -- setup/devices.sh@51 -- # local test_file= 00:05:30.446 12:50:11 -- setup/devices.sh@53 -- # local found=0 00:05:30.446 12:50:11 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.446 12:50:11 -- setup/devices.sh@59 -- # local pci status 00:05:30.446 12:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.446 12:50:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:30.446 12:50:11 -- setup/devices.sh@47 -- # setup output config 00:05:30.446 12:50:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.446 12:50:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.704 12:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.704 12:50:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:30.704 12:50:11 -- setup/devices.sh@63 -- # found=1 00:05:30.704 12:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.704 12:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.704 12:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.963 12:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.963 12:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.220 12:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.220 12:50:11 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.220 12:50:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.220 12:50:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.220 12:50:11 -- setup/devices.sh@68 -- # return 0 00:05:31.220 12:50:11 -- setup/devices.sh@187 -- # cleanup_dm 00:05:31.220 12:50:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.220 12:50:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.220 12:50:11 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:31.221 12:50:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.221 12:50:11 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:31.221 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:31.221 12:50:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.221 12:50:11 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:31.221 00:05:31.221 real 0m4.544s 00:05:31.221 user 0m0.673s 00:05:31.221 sys 0m0.796s 00:05:31.221 12:50:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.221 ************************************ 00:05:31.221 END TEST dm_mount 00:05:31.221 12:50:11 -- common/autotest_common.sh@10 -- # set +x 00:05:31.221 ************************************ 00:05:31.221 12:50:11 -- setup/devices.sh@1 -- # cleanup 00:05:31.221 12:50:11 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:31.221 12:50:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.221 12:50:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.221 12:50:11 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:31.221 12:50:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.221 12:50:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.479 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:31.479 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:31.479 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:31.479 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:31.479 12:50:12 -- setup/devices.sh@12 -- # cleanup_dm 00:05:31.479 12:50:12 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.479 12:50:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.479 12:50:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.479 12:50:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.479 12:50:12 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.479 12:50:12 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:31.479 00:05:31.479 real 0m10.711s 00:05:31.479 user 0m2.466s 00:05:31.479 sys 0m2.553s 00:05:31.479 12:50:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.737 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.737 ************************************ 00:05:31.737 END TEST devices 00:05:31.737 ************************************ 00:05:31.737 ************************************ 00:05:31.737 END TEST setup.sh 00:05:31.737 ************************************ 00:05:31.737 00:05:31.737 real 0m22.872s 00:05:31.737 user 0m7.953s 00:05:31.737 sys 0m9.339s 00:05:31.737 12:50:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.737 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.737 12:50:12 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:31.737 Hugepages 00:05:31.737 node hugesize free / total 00:05:31.737 node0 1048576kB 0 / 0 00:05:31.737 node0 2048kB 2048 / 2048 00:05:31.737 00:05:31.737 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:31.995 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:31.995 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:31.995 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:31.995 12:50:12 -- spdk/autotest.sh@128 -- # uname -s 00:05:31.995 12:50:12 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:31.995 12:50:12 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:31.995 12:50:12 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.819 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.819 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.819 12:50:13 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:34.195 12:50:14 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:34.195 12:50:14 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:34.195 12:50:14 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:34.195 12:50:14 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:34.196 12:50:14 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:34.196 12:50:14 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:34.196 12:50:14 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.196 12:50:14 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:34.196 12:50:14 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:34.196 12:50:14 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:34.196 12:50:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:34.196 12:50:14 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.196 Waiting for block devices as requested 00:05:34.453 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.453 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.453 12:50:15 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:34.453 12:50:15 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:34.453 12:50:15 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:34.453 12:50:15 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:34.453 12:50:15 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:34.453 12:50:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:34.453 12:50:15 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:34.453 12:50:15 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:34.453 12:50:15 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:34.453 12:50:15 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:34.453 12:50:15 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:34.453 12:50:15 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:34.453 12:50:15 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:34.453 12:50:15 -- common/autotest_common.sh@1552 -- # continue 00:05:34.453 12:50:15 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:34.453 12:50:15 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:34.453 12:50:15 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:34.453 12:50:15 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:34.453 12:50:15 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:34.453 12:50:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:34.453 12:50:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:34.453 12:50:15 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:34.453 12:50:15 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:34.453 12:50:15 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:34.453 12:50:15 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:34.453 12:50:15 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:34.453 12:50:15 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:34.711 12:50:15 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:34.711 12:50:15 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:34.711 12:50:15 -- common/autotest_common.sh@1552 -- # continue 00:05:34.711 12:50:15 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:34.711 12:50:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.711 12:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:34.711 12:50:15 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:34.711 12:50:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.711 12:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:34.711 12:50:15 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.279 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.538 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:35.538 12:50:16 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:35.538 12:50:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.538 12:50:16 -- common/autotest_common.sh@10 -- # set +x 00:05:35.538 12:50:16 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:35.538 12:50:16 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:35.538 12:50:16 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:35.538 12:50:16 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:35.538 12:50:16 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:35.538 12:50:16 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:35.538 12:50:16 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:35.538 12:50:16 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:35.538 12:50:16 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.538 12:50:16 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.538 12:50:16 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:35.538 12:50:16 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:35.538 12:50:16 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:35.538 12:50:16 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:35.538 12:50:16 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:35.538 12:50:16 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:35.538 12:50:16 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:35.538 12:50:16 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:35.538 12:50:16 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:35.538 12:50:16 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:35.538 12:50:16 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:35.538 12:50:16 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:35.538 12:50:16 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:35.538 12:50:16 -- common/autotest_common.sh@1588 -- # return 0 00:05:35.538 12:50:16 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:35.538 12:50:16 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:35.538 12:50:16 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:35.538 12:50:16 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:35.538 12:50:16 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:35.538 12:50:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.538 12:50:16 -- common/autotest_common.sh@10 -- # set +x 00:05:35.538 12:50:16 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:35.538 12:50:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.538 12:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.538 12:50:16 -- common/autotest_common.sh@10 -- # set +x 00:05:35.538 ************************************ 00:05:35.538 START TEST env 00:05:35.538 ************************************ 00:05:35.538 12:50:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:35.796 * Looking for test storage... 
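For reference, the pre-cleanup checks traced above reduce to a few one-liners per controller (an illustrative sketch using the device names reported in this run):

  nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2       # 0x12a in this run: the namespace-management bit is set
  nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2    # 0 in this run: no unallocated capacity to revert
  cat /sys/bus/pci/devices/0000:00:06.0/device            # 0x0010 (QEMU NVMe), not the 0x0a54 ID opal_revert_cleanup looks for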
00:05:35.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:35.796 12:50:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.796 12:50:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.796 12:50:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.796 12:50:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.796 12:50:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.796 12:50:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.796 12:50:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.796 12:50:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.796 12:50:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.796 12:50:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.796 12:50:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.796 12:50:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.796 12:50:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.796 12:50:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.796 12:50:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.796 12:50:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.796 12:50:16 -- scripts/common.sh@344 -- # : 1 00:05:35.796 12:50:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.797 12:50:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.797 12:50:16 -- scripts/common.sh@364 -- # decimal 1 00:05:35.797 12:50:16 -- scripts/common.sh@352 -- # local d=1 00:05:35.797 12:50:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.797 12:50:16 -- scripts/common.sh@354 -- # echo 1 00:05:35.797 12:50:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.797 12:50:16 -- scripts/common.sh@365 -- # decimal 2 00:05:35.797 12:50:16 -- scripts/common.sh@352 -- # local d=2 00:05:35.797 12:50:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.797 12:50:16 -- scripts/common.sh@354 -- # echo 2 00:05:35.797 12:50:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.797 12:50:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.797 12:50:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.797 12:50:16 -- scripts/common.sh@367 -- # return 0 00:05:35.797 12:50:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.797 12:50:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 12:50:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 12:50:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 12:50:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.797 --rc genhtml_branch_coverage=1 00:05:35.797 --rc genhtml_function_coverage=1 00:05:35.797 --rc genhtml_legend=1 00:05:35.797 --rc geninfo_all_blocks=1 00:05:35.797 --rc geninfo_unexecuted_blocks=1 00:05:35.797 00:05:35.797 ' 00:05:35.797 12:50:16 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:35.797 12:50:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.797 12:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.797 12:50:16 -- common/autotest_common.sh@10 -- # set +x 00:05:35.797 ************************************ 00:05:35.797 START TEST env_memory 00:05:35.797 ************************************ 00:05:35.797 12:50:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:35.797 00:05:35.797 00:05:35.797 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.797 http://cunit.sourceforge.net/ 00:05:35.797 00:05:35.797 00:05:35.797 Suite: memory 00:05:35.797 Test: alloc and free memory map ...[2024-12-13 12:50:16.495360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.797 passed 00:05:35.797 Test: mem map translation ...[2024-12-13 12:50:16.526520] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.797 [2024-12-13 12:50:16.526714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.797 [2024-12-13 12:50:16.526926] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.797 [2024-12-13 12:50:16.527159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.056 passed 00:05:36.056 Test: mem map registration ...[2024-12-13 12:50:16.597118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:36.056 [2024-12-13 12:50:16.597166] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:36.056 passed 00:05:36.056 Test: mem map adjacent registrations ...passed 00:05:36.056 00:05:36.056 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.056 suites 1 1 n/a 0 0 00:05:36.056 tests 4 4 4 0 0 00:05:36.056 asserts 152 152 152 0 n/a 00:05:36.056 00:05:36.056 Elapsed time = 0.218 seconds 00:05:36.056 00:05:36.056 real 0m0.236s 00:05:36.056 user 0m0.219s 00:05:36.056 sys 0m0.013s 00:05:36.056 12:50:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.056 ************************************ 00:05:36.056 12:50:16 -- common/autotest_common.sh@10 -- # set +x 00:05:36.056 END TEST env_memory 00:05:36.056 ************************************ 00:05:36.056 12:50:16 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.056 12:50:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.056 12:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.056 12:50:16 -- 
common/autotest_common.sh@10 -- # set +x 00:05:36.056 ************************************ 00:05:36.056 START TEST env_vtophys 00:05:36.056 ************************************ 00:05:36.056 12:50:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.056 EAL: lib.eal log level changed from notice to debug 00:05:36.056 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 1 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 2 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 3 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 4 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 5 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 6 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 7 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 8 as core 0 on socket 0 00:05:36.056 EAL: Detected lcore 9 as core 0 on socket 0 00:05:36.056 EAL: Maximum logical cores by configuration: 128 00:05:36.056 EAL: Detected CPU lcores: 10 00:05:36.056 EAL: Detected NUMA nodes: 1 00:05:36.056 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:36.056 EAL: Detected shared linkage of DPDK 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:36.056 EAL: Registered [vdev] bus. 00:05:36.056 EAL: bus.vdev log level changed from disabled to notice 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:36.056 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:36.056 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:36.056 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:36.056 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.056 EAL: No shared files mode enabled, IPC is disabled 00:05:36.056 EAL: Selected IOVA mode 'PA' 00:05:36.056 EAL: Probing VFIO support... 00:05:36.056 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:36.056 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:36.056 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.056 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.056 EAL: Setting up physically contiguous memory... 
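The "Module /sys/module/vfio not found" messages are why EAL stays in IOVA-as-PA mode for this run; a quick manual check of the available userspace I/O drivers looks roughly like this (illustrative):

  ls -d /sys/module/vfio >/dev/null 2>&1 || echo "vfio not loaded"
  lsmod | grep -E '^(vfio|uio)'        # this VM falls back to uio_pci_generic, as seen earlier in the log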
00:05:36.056 EAL: Setting maximum number of open files to 524288 00:05:36.056 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.056 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.056 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.056 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.056 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.056 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.056 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.056 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.056 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.056 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.056 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.056 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.056 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.056 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.056 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.056 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.056 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.056 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.056 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.056 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.056 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.056 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.056 EAL: Hugepages will be freed exactly as allocated. 00:05:36.056 EAL: No shared files mode enabled, IPC is disabled 00:05:36.056 EAL: No shared files mode enabled, IPC is disabled 00:05:36.315 EAL: TSC frequency is ~2200000 KHz 00:05:36.315 EAL: Main lcore 0 is ready (tid=7f40e0d1ca00;cpuset=[0]) 00:05:36.315 EAL: Trying to obtain current memory policy. 00:05:36.315 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.315 EAL: Restoring previous memory policy: 0 00:05:36.315 EAL: request: mp_malloc_sync 00:05:36.315 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.316 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:36.316 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.316 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:36.316 00:05:36.316 00:05:36.316 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.316 http://cunit.sourceforge.net/ 00:05:36.316 00:05:36.316 00:05:36.316 Suite: components_suite 00:05:36.316 Test: vtophys_malloc_test ...passed 00:05:36.316 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 4MB 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was shrunk by 4MB 00:05:36.316 EAL: Trying to obtain current memory policy. 00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 6MB 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was shrunk by 6MB 00:05:36.316 EAL: Trying to obtain current memory policy. 00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 10MB 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was shrunk by 10MB 00:05:36.316 EAL: Trying to obtain current memory policy. 00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 18MB 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was shrunk by 18MB 00:05:36.316 EAL: Trying to obtain current memory policy. 00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 34MB 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was shrunk by 34MB 00:05:36.316 EAL: Trying to obtain current memory policy. 
00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 66MB 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was shrunk by 66MB 00:05:36.316 EAL: Trying to obtain current memory policy. 00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 130MB 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.316 EAL: Trying to obtain current memory policy. 00:05:36.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.316 EAL: Restoring previous memory policy: 4 00:05:36.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.316 EAL: request: mp_malloc_sync 00:05:36.316 EAL: No shared files mode enabled, IPC is disabled 00:05:36.316 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.575 EAL: request: mp_malloc_sync 00:05:36.575 EAL: No shared files mode enabled, IPC is disabled 00:05:36.575 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.575 EAL: Trying to obtain current memory policy. 00:05:36.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.575 EAL: Restoring previous memory policy: 4 00:05:36.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.575 EAL: request: mp_malloc_sync 00:05:36.575 EAL: No shared files mode enabled, IPC is disabled 00:05:36.575 EAL: Heap on socket 0 was expanded by 514MB 00:05:36.834 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.834 EAL: request: mp_malloc_sync 00:05:36.834 EAL: No shared files mode enabled, IPC is disabled 00:05:36.834 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.834 EAL: Trying to obtain current memory policy. 
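Each "expanded by N MB" / "shrunk by N MB" pair above is DPDK's dynamic memory subsystem growing and then releasing 2 MB hugepages as the test allocates and frees progressively larger buffers through the SPDK env layer. The same binary can be run by hand once hugepages are reserved (illustrative; the HUGEMEM value is just an example):

  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh    # reserve 2 GiB of 2 MB hugepages
  sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys         # runs vtophys_malloc_test and vtophys_spdk_malloc_test directly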
00:05:36.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.096 EAL: Restoring previous memory policy: 4 00:05:37.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.096 EAL: request: mp_malloc_sync 00:05:37.096 EAL: No shared files mode enabled, IPC is disabled 00:05:37.096 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.355 passed 00:05:37.355 00:05:37.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.355 suites 1 1 n/a 0 0 00:05:37.355 tests 2 2 2 0 0 00:05:37.355 asserts 5288 5288 5288 0 n/a 00:05:37.355 00:05:37.355 Elapsed time = 1.198 seconds 00:05:37.355 EAL: request: mp_malloc_sync 00:05:37.355 EAL: No shared files mode enabled, IPC is disabled 00:05:37.355 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:37.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.355 EAL: request: mp_malloc_sync 00:05:37.355 EAL: No shared files mode enabled, IPC is disabled 00:05:37.355 EAL: Heap on socket 0 was shrunk by 2MB 00:05:37.355 EAL: No shared files mode enabled, IPC is disabled 00:05:37.355 EAL: No shared files mode enabled, IPC is disabled 00:05:37.355 EAL: No shared files mode enabled, IPC is disabled 00:05:37.355 00:05:37.355 real 0m1.395s 00:05:37.355 user 0m0.761s 00:05:37.355 sys 0m0.499s 00:05:37.355 12:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.355 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.355 ************************************ 00:05:37.355 END TEST env_vtophys 00:05:37.355 ************************************ 00:05:37.613 12:50:18 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:37.613 12:50:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.613 12:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.613 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.613 ************************************ 00:05:37.613 START TEST env_pci 00:05:37.613 ************************************ 00:05:37.613 12:50:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:37.613 00:05:37.613 00:05:37.613 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.613 http://cunit.sourceforge.net/ 00:05:37.613 00:05:37.613 00:05:37.614 Suite: pci 00:05:37.614 Test: pci_hook ...[2024-12-13 12:50:18.190917] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67133 has claimed it 00:05:37.614 passed 00:05:37.614 00:05:37.614 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.614 suites 1 1 n/a 0 0 00:05:37.614 tests 1 1 1 0 0 00:05:37.614 asserts 25 25 25 0 n/a 00:05:37.614 00:05:37.614 Elapsed time = 0.002 seconds 00:05:37.614 EAL: Cannot find device (10000:00:01.0) 00:05:37.614 EAL: Failed to attach device on primary process 00:05:37.614 00:05:37.614 real 0m0.016s 00:05:37.614 user 0m0.009s 00:05:37.614 sys 0m0.006s 00:05:37.614 12:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.614 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.614 ************************************ 00:05:37.614 END TEST env_pci 00:05:37.614 ************************************ 00:05:37.614 12:50:18 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:37.614 12:50:18 -- env/env.sh@15 -- # uname 00:05:37.614 12:50:18 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:37.614 12:50:18 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:37.614 12:50:18 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.614 12:50:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:37.614 12:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.614 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.614 ************************************ 00:05:37.614 START TEST env_dpdk_post_init 00:05:37.614 ************************************ 00:05:37.614 12:50:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.614 EAL: Detected CPU lcores: 10 00:05:37.614 EAL: Detected NUMA nodes: 1 00:05:37.614 EAL: Detected shared linkage of DPDK 00:05:37.614 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.614 EAL: Selected IOVA mode 'PA' 00:05:37.614 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.872 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:37.872 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:37.872 Starting DPDK initialization... 00:05:37.872 Starting SPDK post initialization... 00:05:37.872 SPDK NVMe probe 00:05:37.872 Attaching to 0000:00:06.0 00:05:37.872 Attaching to 0000:00:07.0 00:05:37.872 Attached to 0000:00:06.0 00:05:37.872 Attached to 0000:00:07.0 00:05:37.872 Cleaning up... 00:05:37.872 00:05:37.872 real 0m0.168s 00:05:37.872 user 0m0.035s 00:05:37.872 sys 0m0.035s 00:05:37.872 12:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.872 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.872 ************************************ 00:05:37.872 END TEST env_dpdk_post_init 00:05:37.872 ************************************ 00:05:37.872 12:50:18 -- env/env.sh@26 -- # uname 00:05:37.872 12:50:18 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:37.872 12:50:18 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.872 12:50:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.872 12:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.872 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.872 ************************************ 00:05:37.872 START TEST env_mem_callbacks 00:05:37.872 ************************************ 00:05:37.872 12:50:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.872 EAL: Detected CPU lcores: 10 00:05:37.872 EAL: Detected NUMA nodes: 1 00:05:37.872 EAL: Detected shared linkage of DPDK 00:05:37.872 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.872 EAL: Selected IOVA mode 'PA' 00:05:37.872 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.872 00:05:37.872 00:05:37.872 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.872 http://cunit.sourceforge.net/ 00:05:37.872 00:05:37.872 00:05:37.872 Suite: memory 00:05:37.872 Test: test ... 
00:05:37.872 register 0x200000200000 2097152 00:05:37.872 malloc 3145728 00:05:37.872 register 0x200000400000 4194304 00:05:37.872 buf 0x200000500000 len 3145728 PASSED 00:05:37.872 malloc 64 00:05:37.872 buf 0x2000004fff40 len 64 PASSED 00:05:37.872 malloc 4194304 00:05:37.872 register 0x200000800000 6291456 00:05:37.872 buf 0x200000a00000 len 4194304 PASSED 00:05:37.872 free 0x200000500000 3145728 00:05:37.872 free 0x2000004fff40 64 00:05:37.872 unregister 0x200000400000 4194304 PASSED 00:05:37.872 free 0x200000a00000 4194304 00:05:37.872 unregister 0x200000800000 6291456 PASSED 00:05:37.872 malloc 8388608 00:05:37.872 register 0x200000400000 10485760 00:05:37.872 buf 0x200000600000 len 8388608 PASSED 00:05:37.872 free 0x200000600000 8388608 00:05:37.872 unregister 0x200000400000 10485760 PASSED 00:05:37.872 passed 00:05:37.872 00:05:37.872 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.872 suites 1 1 n/a 0 0 00:05:37.872 tests 1 1 1 0 0 00:05:37.872 asserts 15 15 15 0 n/a 00:05:37.872 00:05:37.872 Elapsed time = 0.008 seconds 00:05:37.872 00:05:37.872 real 0m0.137s 00:05:37.872 user 0m0.017s 00:05:37.872 sys 0m0.020s 00:05:37.872 12:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.872 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.872 ************************************ 00:05:37.872 END TEST env_mem_callbacks 00:05:37.872 ************************************ 00:05:38.131 00:05:38.131 real 0m2.392s 00:05:38.131 user 0m1.251s 00:05:38.131 sys 0m0.788s 00:05:38.131 12:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.131 ************************************ 00:05:38.131 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:38.131 END TEST env 00:05:38.131 ************************************ 00:05:38.131 12:50:18 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:38.131 12:50:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.131 12:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.131 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:38.131 ************************************ 00:05:38.131 START TEST rpc 00:05:38.131 ************************************ 00:05:38.131 12:50:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:38.131 * Looking for test storage... 
00:05:38.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:38.131 12:50:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.131 12:50:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.131 12:50:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.131 12:50:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.131 12:50:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.131 12:50:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.131 12:50:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.131 12:50:18 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.131 12:50:18 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.131 12:50:18 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.131 12:50:18 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.131 12:50:18 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.131 12:50:18 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.131 12:50:18 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.131 12:50:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.131 12:50:18 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.131 12:50:18 -- scripts/common.sh@344 -- # : 1 00:05:38.131 12:50:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.131 12:50:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.131 12:50:18 -- scripts/common.sh@364 -- # decimal 1 00:05:38.131 12:50:18 -- scripts/common.sh@352 -- # local d=1 00:05:38.131 12:50:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.131 12:50:18 -- scripts/common.sh@354 -- # echo 1 00:05:38.131 12:50:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.131 12:50:18 -- scripts/common.sh@365 -- # decimal 2 00:05:38.131 12:50:18 -- scripts/common.sh@352 -- # local d=2 00:05:38.131 12:50:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.131 12:50:18 -- scripts/common.sh@354 -- # echo 2 00:05:38.131 12:50:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
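The "Waiting for process to start up..." message comes from waitforlisten; in essence the harness launches the target and polls the RPC socket, roughly like this (an illustrative sketch, not the exact helper):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &          # start the target with the bdev tracepoint group
  spdk_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                                        # keep polling until /var/tmp/spdk.sock answers
  done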
00:05:38.131 12:50:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.131 12:50:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.131 12:50:18 -- scripts/common.sh@367 -- # return 0 00:05:38.131 12:50:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.131 12:50:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.131 --rc genhtml_branch_coverage=1 00:05:38.131 --rc genhtml_function_coverage=1 00:05:38.131 --rc genhtml_legend=1 00:05:38.131 --rc geninfo_all_blocks=1 00:05:38.131 --rc geninfo_unexecuted_blocks=1 00:05:38.131 00:05:38.131 ' 00:05:38.131 12:50:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.131 --rc genhtml_branch_coverage=1 00:05:38.131 --rc genhtml_function_coverage=1 00:05:38.131 --rc genhtml_legend=1 00:05:38.131 --rc geninfo_all_blocks=1 00:05:38.131 --rc geninfo_unexecuted_blocks=1 00:05:38.131 00:05:38.131 ' 00:05:38.131 12:50:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.131 --rc genhtml_branch_coverage=1 00:05:38.131 --rc genhtml_function_coverage=1 00:05:38.131 --rc genhtml_legend=1 00:05:38.131 --rc geninfo_all_blocks=1 00:05:38.131 --rc geninfo_unexecuted_blocks=1 00:05:38.131 00:05:38.131 ' 00:05:38.131 12:50:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.131 --rc genhtml_branch_coverage=1 00:05:38.131 --rc genhtml_function_coverage=1 00:05:38.131 --rc genhtml_legend=1 00:05:38.131 --rc geninfo_all_blocks=1 00:05:38.131 --rc geninfo_unexecuted_blocks=1 00:05:38.131 00:05:38.131 ' 00:05:38.131 12:50:18 -- rpc/rpc.sh@65 -- # spdk_pid=67250 00:05:38.131 12:50:18 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.131 12:50:18 -- rpc/rpc.sh@67 -- # waitforlisten 67250 00:05:38.131 12:50:18 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:38.131 12:50:18 -- common/autotest_common.sh@829 -- # '[' -z 67250 ']' 00:05:38.131 12:50:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.131 12:50:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.131 12:50:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.131 12:50:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.131 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:38.390 [2024-12-13 12:50:18.951850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:38.390 [2024-12-13 12:50:18.951952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67250 ] 00:05:38.390 [2024-12-13 12:50:19.089271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.390 [2024-12-13 12:50:19.153489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.390 [2024-12-13 12:50:19.153619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
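The rpc_integrity test that follows exercises the bdev RPCs end to end; stripped of the harness it is essentially this sequence (illustrative, using the same RPC names and arguments seen below):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512                      # 8 MB malloc bdev with 512-byte blocks -> Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top of it
  $rpc bdev_get_bdevs | jq length                    # expect 2 bdevs while both exist
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                    # back to 0 after cleanup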
00:05:38.390 [2024-12-13 12:50:19.153632] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67250' to capture a snapshot of events at runtime. 00:05:38.390 [2024-12-13 12:50:19.153639] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67250 for offline analysis/debug. 00:05:38.390 [2024-12-13 12:50:19.153680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.326 12:50:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.326 12:50:19 -- common/autotest_common.sh@862 -- # return 0 00:05:39.326 12:50:19 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.326 12:50:19 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.326 12:50:19 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:39.326 12:50:19 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:39.326 12:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.326 12:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.326 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:39.326 ************************************ 00:05:39.326 START TEST rpc_integrity 00:05:39.326 ************************************ 00:05:39.326 12:50:19 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:39.326 12:50:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.326 12:50:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.326 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:39.326 12:50:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.326 12:50:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.326 12:50:19 -- rpc/rpc.sh@13 -- # jq length 00:05:39.326 12:50:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.326 12:50:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.326 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.326 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.326 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.326 12:50:20 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:39.326 12:50:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.326 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.326 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.326 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.326 12:50:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.326 { 00:05:39.326 "aliases": [ 00:05:39.326 "ec63f6d8-7c7f-4d22-82af-86205e15aae1" 00:05:39.326 ], 00:05:39.326 "assigned_rate_limits": { 00:05:39.327 "r_mbytes_per_sec": 0, 00:05:39.327 "rw_ios_per_sec": 0, 00:05:39.327 "rw_mbytes_per_sec": 0, 00:05:39.327 "w_mbytes_per_sec": 0 00:05:39.327 }, 00:05:39.327 "block_size": 512, 00:05:39.327 "claimed": false, 00:05:39.327 "driver_specific": {}, 00:05:39.327 "memory_domains": [ 00:05:39.327 { 00:05:39.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.327 "dma_device_type": 2 00:05:39.327 } 00:05:39.327 ], 00:05:39.327 "name": "Malloc0", 00:05:39.327 "num_blocks": 16384, 00:05:39.327 "product_name": "Malloc disk", 00:05:39.327 "supported_io_types": { 00:05:39.327 "abort": true, 00:05:39.327 
"compare": false, 00:05:39.327 "compare_and_write": false, 00:05:39.327 "flush": true, 00:05:39.327 "nvme_admin": false, 00:05:39.327 "nvme_io": false, 00:05:39.327 "read": true, 00:05:39.327 "reset": true, 00:05:39.327 "unmap": true, 00:05:39.327 "write": true, 00:05:39.327 "write_zeroes": true 00:05:39.327 }, 00:05:39.327 "uuid": "ec63f6d8-7c7f-4d22-82af-86205e15aae1", 00:05:39.327 "zoned": false 00:05:39.327 } 00:05:39.327 ]' 00:05:39.327 12:50:20 -- rpc/rpc.sh@17 -- # jq length 00:05:39.586 12:50:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.586 12:50:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:39.586 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 [2024-12-13 12:50:20.112812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:39.586 [2024-12-13 12:50:20.112871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.586 [2024-12-13 12:50:20.112887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16be490 00:05:39.586 [2024-12-13 12:50:20.112895] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.586 [2024-12-13 12:50:20.114317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.586 [2024-12-13 12:50:20.114365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.586 Passthru0 00:05:39.586 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.586 12:50:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.586 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.586 12:50:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.586 { 00:05:39.586 "aliases": [ 00:05:39.586 "ec63f6d8-7c7f-4d22-82af-86205e15aae1" 00:05:39.586 ], 00:05:39.586 "assigned_rate_limits": { 00:05:39.586 "r_mbytes_per_sec": 0, 00:05:39.586 "rw_ios_per_sec": 0, 00:05:39.586 "rw_mbytes_per_sec": 0, 00:05:39.586 "w_mbytes_per_sec": 0 00:05:39.586 }, 00:05:39.586 "block_size": 512, 00:05:39.586 "claim_type": "exclusive_write", 00:05:39.586 "claimed": true, 00:05:39.586 "driver_specific": {}, 00:05:39.586 "memory_domains": [ 00:05:39.586 { 00:05:39.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.586 "dma_device_type": 2 00:05:39.586 } 00:05:39.586 ], 00:05:39.586 "name": "Malloc0", 00:05:39.586 "num_blocks": 16384, 00:05:39.586 "product_name": "Malloc disk", 00:05:39.586 "supported_io_types": { 00:05:39.586 "abort": true, 00:05:39.586 "compare": false, 00:05:39.586 "compare_and_write": false, 00:05:39.586 "flush": true, 00:05:39.586 "nvme_admin": false, 00:05:39.586 "nvme_io": false, 00:05:39.586 "read": true, 00:05:39.586 "reset": true, 00:05:39.586 "unmap": true, 00:05:39.586 "write": true, 00:05:39.586 "write_zeroes": true 00:05:39.586 }, 00:05:39.586 "uuid": "ec63f6d8-7c7f-4d22-82af-86205e15aae1", 00:05:39.586 "zoned": false 00:05:39.586 }, 00:05:39.586 { 00:05:39.586 "aliases": [ 00:05:39.586 "b84d674c-0ab2-5ff7-ba95-b7317dcfec40" 00:05:39.586 ], 00:05:39.586 "assigned_rate_limits": { 00:05:39.586 "r_mbytes_per_sec": 0, 00:05:39.586 "rw_ios_per_sec": 0, 00:05:39.586 "rw_mbytes_per_sec": 0, 00:05:39.586 "w_mbytes_per_sec": 0 00:05:39.586 }, 00:05:39.586 "block_size": 512, 00:05:39.586 "claimed": false, 00:05:39.586 
"driver_specific": { 00:05:39.586 "passthru": { 00:05:39.586 "base_bdev_name": "Malloc0", 00:05:39.586 "name": "Passthru0" 00:05:39.586 } 00:05:39.586 }, 00:05:39.586 "memory_domains": [ 00:05:39.586 { 00:05:39.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.586 "dma_device_type": 2 00:05:39.586 } 00:05:39.586 ], 00:05:39.586 "name": "Passthru0", 00:05:39.586 "num_blocks": 16384, 00:05:39.586 "product_name": "passthru", 00:05:39.586 "supported_io_types": { 00:05:39.586 "abort": true, 00:05:39.586 "compare": false, 00:05:39.586 "compare_and_write": false, 00:05:39.586 "flush": true, 00:05:39.586 "nvme_admin": false, 00:05:39.586 "nvme_io": false, 00:05:39.586 "read": true, 00:05:39.586 "reset": true, 00:05:39.586 "unmap": true, 00:05:39.586 "write": true, 00:05:39.586 "write_zeroes": true 00:05:39.586 }, 00:05:39.586 "uuid": "b84d674c-0ab2-5ff7-ba95-b7317dcfec40", 00:05:39.586 "zoned": false 00:05:39.586 } 00:05:39.586 ]' 00:05:39.586 12:50:20 -- rpc/rpc.sh@21 -- # jq length 00:05:39.586 12:50:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.586 12:50:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.586 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.586 12:50:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:39.586 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.586 12:50:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.586 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.586 12:50:20 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.586 12:50:20 -- rpc/rpc.sh@26 -- # jq length 00:05:39.586 ************************************ 00:05:39.586 END TEST rpc_integrity 00:05:39.586 ************************************ 00:05:39.586 12:50:20 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.586 00:05:39.586 real 0m0.298s 00:05:39.586 user 0m0.191s 00:05:39.586 sys 0m0.034s 00:05:39.586 12:50:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:50:20 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.586 12:50:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.586 12:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 ************************************ 00:05:39.586 START TEST rpc_plugins 00:05:39.586 ************************************ 00:05:39.586 12:50:20 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:39.586 12:50:20 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.586 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.586 12:50:20 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.586 12:50:20 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.586 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.586 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 
12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.586 12:50:20 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.586 { 00:05:39.586 "aliases": [ 00:05:39.586 "35ece150-7c94-444a-92f6-000c16248cd2" 00:05:39.586 ], 00:05:39.586 "assigned_rate_limits": { 00:05:39.586 "r_mbytes_per_sec": 0, 00:05:39.586 "rw_ios_per_sec": 0, 00:05:39.586 "rw_mbytes_per_sec": 0, 00:05:39.586 "w_mbytes_per_sec": 0 00:05:39.586 }, 00:05:39.586 "block_size": 4096, 00:05:39.586 "claimed": false, 00:05:39.586 "driver_specific": {}, 00:05:39.586 "memory_domains": [ 00:05:39.586 { 00:05:39.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.586 "dma_device_type": 2 00:05:39.586 } 00:05:39.586 ], 00:05:39.586 "name": "Malloc1", 00:05:39.586 "num_blocks": 256, 00:05:39.586 "product_name": "Malloc disk", 00:05:39.586 "supported_io_types": { 00:05:39.586 "abort": true, 00:05:39.586 "compare": false, 00:05:39.586 "compare_and_write": false, 00:05:39.586 "flush": true, 00:05:39.586 "nvme_admin": false, 00:05:39.586 "nvme_io": false, 00:05:39.586 "read": true, 00:05:39.586 "reset": true, 00:05:39.586 "unmap": true, 00:05:39.586 "write": true, 00:05:39.586 "write_zeroes": true 00:05:39.586 }, 00:05:39.586 "uuid": "35ece150-7c94-444a-92f6-000c16248cd2", 00:05:39.586 "zoned": false 00:05:39.586 } 00:05:39.586 ]' 00:05:39.586 12:50:20 -- rpc/rpc.sh@32 -- # jq length 00:05:39.845 12:50:20 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:39.845 12:50:20 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:39.845 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.845 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.845 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.845 12:50:20 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:39.845 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.845 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.845 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.845 12:50:20 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:39.845 12:50:20 -- rpc/rpc.sh@36 -- # jq length 00:05:39.845 ************************************ 00:05:39.845 END TEST rpc_plugins 00:05:39.845 ************************************ 00:05:39.845 12:50:20 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:39.845 00:05:39.845 real 0m0.139s 00:05:39.845 user 0m0.091s 00:05:39.845 sys 0m0.017s 00:05:39.845 12:50:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.845 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.845 12:50:20 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:39.845 12:50:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.845 12:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.845 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.845 ************************************ 00:05:39.845 START TEST rpc_trace_cmd_test 00:05:39.845 ************************************ 00:05:39.845 12:50:20 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:39.845 12:50:20 -- rpc/rpc.sh@40 -- # local info 00:05:39.845 12:50:20 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:39.845 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.845 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.845 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.845 12:50:20 -- rpc/rpc.sh@42 -- # info='{ 00:05:39.845 "bdev": { 00:05:39.845 "mask": "0x8", 00:05:39.845 "tpoint_mask": 
"0xffffffffffffffff" 00:05:39.845 }, 00:05:39.845 "bdev_nvme": { 00:05:39.845 "mask": "0x4000", 00:05:39.845 "tpoint_mask": "0x0" 00:05:39.845 }, 00:05:39.845 "blobfs": { 00:05:39.845 "mask": "0x80", 00:05:39.845 "tpoint_mask": "0x0" 00:05:39.845 }, 00:05:39.845 "dsa": { 00:05:39.845 "mask": "0x200", 00:05:39.845 "tpoint_mask": "0x0" 00:05:39.845 }, 00:05:39.845 "ftl": { 00:05:39.845 "mask": "0x40", 00:05:39.845 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "iaa": { 00:05:39.846 "mask": "0x1000", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "iscsi_conn": { 00:05:39.846 "mask": "0x2", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "nvme_pcie": { 00:05:39.846 "mask": "0x800", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "nvme_tcp": { 00:05:39.846 "mask": "0x2000", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "nvmf_rdma": { 00:05:39.846 "mask": "0x10", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "nvmf_tcp": { 00:05:39.846 "mask": "0x20", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "scsi": { 00:05:39.846 "mask": "0x4", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "thread": { 00:05:39.846 "mask": "0x400", 00:05:39.846 "tpoint_mask": "0x0" 00:05:39.846 }, 00:05:39.846 "tpoint_group_mask": "0x8", 00:05:39.846 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67250" 00:05:39.846 }' 00:05:39.846 12:50:20 -- rpc/rpc.sh@43 -- # jq length 00:05:39.846 12:50:20 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:39.846 12:50:20 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:40.104 12:50:20 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:40.104 12:50:20 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:40.104 12:50:20 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:40.104 12:50:20 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:40.104 12:50:20 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:40.104 12:50:20 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:40.104 ************************************ 00:05:40.104 END TEST rpc_trace_cmd_test 00:05:40.104 ************************************ 00:05:40.104 12:50:20 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:40.104 00:05:40.104 real 0m0.270s 00:05:40.104 user 0m0.232s 00:05:40.104 sys 0m0.029s 00:05:40.105 12:50:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.105 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:40.105 12:50:20 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:40.105 12:50:20 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:40.105 12:50:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.105 12:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.105 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:40.105 ************************************ 00:05:40.105 START TEST go_rpc 00:05:40.105 ************************************ 00:05:40.105 12:50:20 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:40.105 12:50:20 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:40.105 12:50:20 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:40.105 12:50:20 -- rpc/rpc.sh@52 -- # jq length 00:05:40.363 12:50:20 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:40.363 12:50:20 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.363 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.363 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:40.363 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:05:40.363 12:50:20 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:40.363 12:50:20 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:40.363 12:50:20 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["871efe6f-5840-4f54-b541-7c38e340ab67"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"871efe6f-5840-4f54-b541-7c38e340ab67","zoned":false}]' 00:05:40.363 12:50:20 -- rpc/rpc.sh@57 -- # jq length 00:05:40.363 12:50:20 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:40.363 12:50:20 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:40.363 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.363 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:40.363 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.363 12:50:20 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:40.363 12:50:21 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:40.364 12:50:21 -- rpc/rpc.sh@61 -- # jq length 00:05:40.364 ************************************ 00:05:40.364 END TEST go_rpc 00:05:40.364 ************************************ 00:05:40.364 12:50:21 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:40.364 00:05:40.364 real 0m0.220s 00:05:40.364 user 0m0.151s 00:05:40.364 sys 0m0.037s 00:05:40.364 12:50:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.364 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.364 12:50:21 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:40.364 12:50:21 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:40.364 12:50:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.364 12:50:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.364 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.364 ************************************ 00:05:40.364 START TEST rpc_daemon_integrity 00:05:40.364 ************************************ 00:05:40.364 12:50:21 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:40.364 12:50:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.364 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.364 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.364 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.364 12:50:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.364 12:50:21 -- rpc/rpc.sh@13 -- # jq length 00:05:40.622 12:50:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.622 12:50:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.622 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.622 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.622 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.622 12:50:21 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:40.622 12:50:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.622 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.622 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.622 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.622 
12:50:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.622 { 00:05:40.622 "aliases": [ 00:05:40.622 "7488fa8a-abdd-4cbf-88e1-ccf4fa40e19e" 00:05:40.622 ], 00:05:40.622 "assigned_rate_limits": { 00:05:40.622 "r_mbytes_per_sec": 0, 00:05:40.622 "rw_ios_per_sec": 0, 00:05:40.622 "rw_mbytes_per_sec": 0, 00:05:40.622 "w_mbytes_per_sec": 0 00:05:40.622 }, 00:05:40.622 "block_size": 512, 00:05:40.622 "claimed": false, 00:05:40.622 "driver_specific": {}, 00:05:40.622 "memory_domains": [ 00:05:40.622 { 00:05:40.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.622 "dma_device_type": 2 00:05:40.622 } 00:05:40.622 ], 00:05:40.622 "name": "Malloc3", 00:05:40.622 "num_blocks": 16384, 00:05:40.622 "product_name": "Malloc disk", 00:05:40.622 "supported_io_types": { 00:05:40.622 "abort": true, 00:05:40.622 "compare": false, 00:05:40.622 "compare_and_write": false, 00:05:40.622 "flush": true, 00:05:40.622 "nvme_admin": false, 00:05:40.622 "nvme_io": false, 00:05:40.622 "read": true, 00:05:40.622 "reset": true, 00:05:40.622 "unmap": true, 00:05:40.622 "write": true, 00:05:40.622 "write_zeroes": true 00:05:40.622 }, 00:05:40.622 "uuid": "7488fa8a-abdd-4cbf-88e1-ccf4fa40e19e", 00:05:40.622 "zoned": false 00:05:40.622 } 00:05:40.622 ]' 00:05:40.622 12:50:21 -- rpc/rpc.sh@17 -- # jq length 00:05:40.622 12:50:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.622 12:50:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:40.622 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.622 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.622 [2024-12-13 12:50:21.269307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:40.622 [2024-12-13 12:50:21.269374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.622 [2024-12-13 12:50:21.269388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15111d0 00:05:40.622 [2024-12-13 12:50:21.269396] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.622 [2024-12-13 12:50:21.270609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.622 [2024-12-13 12:50:21.270652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.622 Passthru0 00:05:40.622 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.622 12:50:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.622 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.622 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.622 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.622 12:50:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.622 { 00:05:40.622 "aliases": [ 00:05:40.622 "7488fa8a-abdd-4cbf-88e1-ccf4fa40e19e" 00:05:40.622 ], 00:05:40.622 "assigned_rate_limits": { 00:05:40.622 "r_mbytes_per_sec": 0, 00:05:40.622 "rw_ios_per_sec": 0, 00:05:40.622 "rw_mbytes_per_sec": 0, 00:05:40.622 "w_mbytes_per_sec": 0 00:05:40.622 }, 00:05:40.622 "block_size": 512, 00:05:40.622 "claim_type": "exclusive_write", 00:05:40.622 "claimed": true, 00:05:40.622 "driver_specific": {}, 00:05:40.622 "memory_domains": [ 00:05:40.622 { 00:05:40.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.622 "dma_device_type": 2 00:05:40.622 } 00:05:40.622 ], 00:05:40.622 "name": "Malloc3", 00:05:40.622 "num_blocks": 16384, 00:05:40.622 "product_name": "Malloc disk", 00:05:40.622 "supported_io_types": { 00:05:40.622 "abort": true, 00:05:40.622 
"compare": false, 00:05:40.622 "compare_and_write": false, 00:05:40.622 "flush": true, 00:05:40.622 "nvme_admin": false, 00:05:40.622 "nvme_io": false, 00:05:40.622 "read": true, 00:05:40.622 "reset": true, 00:05:40.622 "unmap": true, 00:05:40.622 "write": true, 00:05:40.622 "write_zeroes": true 00:05:40.622 }, 00:05:40.622 "uuid": "7488fa8a-abdd-4cbf-88e1-ccf4fa40e19e", 00:05:40.622 "zoned": false 00:05:40.622 }, 00:05:40.622 { 00:05:40.622 "aliases": [ 00:05:40.622 "a9382cfb-57ed-5388-880b-62efa0f8a7c9" 00:05:40.622 ], 00:05:40.622 "assigned_rate_limits": { 00:05:40.622 "r_mbytes_per_sec": 0, 00:05:40.622 "rw_ios_per_sec": 0, 00:05:40.622 "rw_mbytes_per_sec": 0, 00:05:40.622 "w_mbytes_per_sec": 0 00:05:40.622 }, 00:05:40.622 "block_size": 512, 00:05:40.622 "claimed": false, 00:05:40.622 "driver_specific": { 00:05:40.622 "passthru": { 00:05:40.622 "base_bdev_name": "Malloc3", 00:05:40.622 "name": "Passthru0" 00:05:40.622 } 00:05:40.622 }, 00:05:40.622 "memory_domains": [ 00:05:40.622 { 00:05:40.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.622 "dma_device_type": 2 00:05:40.622 } 00:05:40.622 ], 00:05:40.622 "name": "Passthru0", 00:05:40.622 "num_blocks": 16384, 00:05:40.622 "product_name": "passthru", 00:05:40.622 "supported_io_types": { 00:05:40.622 "abort": true, 00:05:40.622 "compare": false, 00:05:40.622 "compare_and_write": false, 00:05:40.622 "flush": true, 00:05:40.622 "nvme_admin": false, 00:05:40.622 "nvme_io": false, 00:05:40.622 "read": true, 00:05:40.622 "reset": true, 00:05:40.622 "unmap": true, 00:05:40.622 "write": true, 00:05:40.622 "write_zeroes": true 00:05:40.622 }, 00:05:40.622 "uuid": "a9382cfb-57ed-5388-880b-62efa0f8a7c9", 00:05:40.622 "zoned": false 00:05:40.622 } 00:05:40.622 ]' 00:05:40.622 12:50:21 -- rpc/rpc.sh@21 -- # jq length 00:05:40.622 12:50:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.622 12:50:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.622 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.622 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.622 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.622 12:50:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:40.622 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.622 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.622 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.622 12:50:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.622 12:50:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.622 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.622 12:50:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.622 12:50:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.622 12:50:21 -- rpc/rpc.sh@26 -- # jq length 00:05:40.881 ************************************ 00:05:40.881 END TEST rpc_daemon_integrity 00:05:40.881 ************************************ 00:05:40.881 12:50:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.881 00:05:40.881 real 0m0.323s 00:05:40.881 user 0m0.220s 00:05:40.881 sys 0m0.036s 00:05:40.881 12:50:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.881 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.881 12:50:21 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.881 12:50:21 -- rpc/rpc.sh@84 -- # killprocess 67250 00:05:40.881 12:50:21 -- common/autotest_common.sh@936 -- # '[' -z 67250 ']' 00:05:40.881 12:50:21 -- common/autotest_common.sh@940 -- # kill -0 
67250 00:05:40.881 12:50:21 -- common/autotest_common.sh@941 -- # uname 00:05:40.881 12:50:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.881 12:50:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67250 00:05:40.881 killing process with pid 67250 00:05:40.881 12:50:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.881 12:50:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.881 12:50:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67250' 00:05:40.881 12:50:21 -- common/autotest_common.sh@955 -- # kill 67250 00:05:40.881 12:50:21 -- common/autotest_common.sh@960 -- # wait 67250 00:05:41.140 00:05:41.140 real 0m3.147s 00:05:41.140 user 0m4.136s 00:05:41.140 sys 0m0.772s 00:05:41.140 12:50:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.140 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:41.140 ************************************ 00:05:41.140 END TEST rpc 00:05:41.140 ************************************ 00:05:41.140 12:50:21 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:41.140 12:50:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.140 12:50:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.140 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:41.140 ************************************ 00:05:41.140 START TEST rpc_client 00:05:41.140 ************************************ 00:05:41.140 12:50:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:41.398 * Looking for test storage... 00:05:41.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:41.398 12:50:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:41.398 12:50:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:41.398 12:50:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.398 12:50:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.398 12:50:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.398 12:50:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.398 12:50:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.398 12:50:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.398 12:50:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.398 12:50:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.398 12:50:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.398 12:50:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.398 12:50:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.398 12:50:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.398 12:50:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.398 12:50:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.398 12:50:22 -- scripts/common.sh@344 -- # : 1 00:05:41.398 12:50:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.398 12:50:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.398 12:50:22 -- scripts/common.sh@364 -- # decimal 1 00:05:41.398 12:50:22 -- scripts/common.sh@352 -- # local d=1 00:05:41.398 12:50:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.398 12:50:22 -- scripts/common.sh@354 -- # echo 1 00:05:41.398 12:50:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.398 12:50:22 -- scripts/common.sh@365 -- # decimal 2 00:05:41.398 12:50:22 -- scripts/common.sh@352 -- # local d=2 00:05:41.398 12:50:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.398 12:50:22 -- scripts/common.sh@354 -- # echo 2 00:05:41.398 12:50:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.398 12:50:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.398 12:50:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.398 12:50:22 -- scripts/common.sh@367 -- # return 0 00:05:41.398 12:50:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.398 12:50:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.398 --rc genhtml_branch_coverage=1 00:05:41.398 --rc genhtml_function_coverage=1 00:05:41.398 --rc genhtml_legend=1 00:05:41.398 --rc geninfo_all_blocks=1 00:05:41.398 --rc geninfo_unexecuted_blocks=1 00:05:41.398 00:05:41.398 ' 00:05:41.398 12:50:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.398 --rc genhtml_branch_coverage=1 00:05:41.398 --rc genhtml_function_coverage=1 00:05:41.398 --rc genhtml_legend=1 00:05:41.398 --rc geninfo_all_blocks=1 00:05:41.398 --rc geninfo_unexecuted_blocks=1 00:05:41.398 00:05:41.398 ' 00:05:41.398 12:50:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.398 --rc genhtml_branch_coverage=1 00:05:41.398 --rc genhtml_function_coverage=1 00:05:41.398 --rc genhtml_legend=1 00:05:41.398 --rc geninfo_all_blocks=1 00:05:41.398 --rc geninfo_unexecuted_blocks=1 00:05:41.398 00:05:41.398 ' 00:05:41.398 12:50:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.398 --rc genhtml_branch_coverage=1 00:05:41.398 --rc genhtml_function_coverage=1 00:05:41.398 --rc genhtml_legend=1 00:05:41.398 --rc geninfo_all_blocks=1 00:05:41.398 --rc geninfo_unexecuted_blocks=1 00:05:41.398 00:05:41.398 ' 00:05:41.398 12:50:22 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:41.398 OK 00:05:41.398 12:50:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.398 00:05:41.398 real 0m0.199s 00:05:41.398 user 0m0.123s 00:05:41.398 sys 0m0.089s 00:05:41.398 12:50:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.398 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 ************************************ 00:05:41.398 END TEST rpc_client 00:05:41.398 ************************************ 00:05:41.398 12:50:22 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.398 12:50:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.398 12:50:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.398 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 ************************************ 00:05:41.398 START TEST 
json_config 00:05:41.398 ************************************ 00:05:41.398 12:50:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.658 12:50:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:41.658 12:50:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:41.658 12:50:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.658 12:50:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.658 12:50:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.658 12:50:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.658 12:50:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.658 12:50:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.658 12:50:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.658 12:50:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.658 12:50:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.658 12:50:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.658 12:50:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.658 12:50:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.658 12:50:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.658 12:50:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.658 12:50:22 -- scripts/common.sh@344 -- # : 1 00:05:41.658 12:50:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.658 12:50:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.658 12:50:22 -- scripts/common.sh@364 -- # decimal 1 00:05:41.658 12:50:22 -- scripts/common.sh@352 -- # local d=1 00:05:41.658 12:50:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.658 12:50:22 -- scripts/common.sh@354 -- # echo 1 00:05:41.658 12:50:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.658 12:50:22 -- scripts/common.sh@365 -- # decimal 2 00:05:41.658 12:50:22 -- scripts/common.sh@352 -- # local d=2 00:05:41.658 12:50:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.658 12:50:22 -- scripts/common.sh@354 -- # echo 2 00:05:41.658 12:50:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.658 12:50:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.658 12:50:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.658 12:50:22 -- scripts/common.sh@367 -- # return 0 00:05:41.658 12:50:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.658 12:50:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.658 --rc genhtml_branch_coverage=1 00:05:41.658 --rc genhtml_function_coverage=1 00:05:41.658 --rc genhtml_legend=1 00:05:41.658 --rc geninfo_all_blocks=1 00:05:41.658 --rc geninfo_unexecuted_blocks=1 00:05:41.658 00:05:41.658 ' 00:05:41.658 12:50:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.658 --rc genhtml_branch_coverage=1 00:05:41.658 --rc genhtml_function_coverage=1 00:05:41.658 --rc genhtml_legend=1 00:05:41.658 --rc geninfo_all_blocks=1 00:05:41.658 --rc geninfo_unexecuted_blocks=1 00:05:41.658 00:05:41.658 ' 00:05:41.658 12:50:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.658 --rc genhtml_branch_coverage=1 00:05:41.658 --rc genhtml_function_coverage=1 00:05:41.658 --rc genhtml_legend=1 00:05:41.658 --rc 
geninfo_all_blocks=1 00:05:41.658 --rc geninfo_unexecuted_blocks=1 00:05:41.658 00:05:41.658 ' 00:05:41.658 12:50:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.658 --rc genhtml_branch_coverage=1 00:05:41.658 --rc genhtml_function_coverage=1 00:05:41.658 --rc genhtml_legend=1 00:05:41.658 --rc geninfo_all_blocks=1 00:05:41.658 --rc geninfo_unexecuted_blocks=1 00:05:41.658 00:05:41.658 ' 00:05:41.658 12:50:22 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.658 12:50:22 -- nvmf/common.sh@7 -- # uname -s 00:05:41.658 12:50:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.658 12:50:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.658 12:50:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.658 12:50:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.658 12:50:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.658 12:50:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.658 12:50:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.658 12:50:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.658 12:50:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.658 12:50:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.658 12:50:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:05:41.658 12:50:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:05:41.658 12:50:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.658 12:50:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.658 12:50:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.658 12:50:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.658 12:50:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.658 12:50:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.658 12:50:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.658 12:50:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.658 12:50:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.658 12:50:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.658 
12:50:22 -- paths/export.sh@5 -- # export PATH 00:05:41.658 12:50:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.658 12:50:22 -- nvmf/common.sh@46 -- # : 0 00:05:41.658 12:50:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:41.658 12:50:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:41.658 12:50:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:41.658 12:50:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.659 12:50:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.659 12:50:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:41.659 12:50:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:41.659 12:50:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:41.659 12:50:22 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:41.659 12:50:22 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:41.659 12:50:22 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:41.659 12:50:22 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.659 12:50:22 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:41.659 12:50:22 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:41.659 12:50:22 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:41.659 12:50:22 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:41.659 12:50:22 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:41.659 12:50:22 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:41.659 12:50:22 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:41.659 12:50:22 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:41.659 12:50:22 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:41.659 12:50:22 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.659 12:50:22 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:41.659 INFO: JSON configuration test init 00:05:41.659 12:50:22 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:41.659 12:50:22 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:41.659 12:50:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.659 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.659 12:50:22 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:41.659 12:50:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.659 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.659 Waiting for target to run... 00:05:41.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
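The 'Waiting for target to run...' messages above come from json_config_test_start_app, which launches a dedicated spdk_tgt and blocks until its RPC socket answers before any configuration is sent. A rough sketch of that pattern, using only the paths and flags that appear in this trace (the polling loop is illustrative; the suite's own waitforlisten helper differs in detail):

    # start the target paused on a private RPC socket; configuration will arrive over RPC
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # poll until the socket accepts RPCs (rpc_get_methods responds even before framework start)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done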
00:05:41.659 12:50:22 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:41.659 12:50:22 -- json_config/json_config.sh@98 -- # local app=target 00:05:41.659 12:50:22 -- json_config/json_config.sh@99 -- # shift 00:05:41.659 12:50:22 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:41.659 12:50:22 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:41.659 12:50:22 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:41.659 12:50:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:41.659 12:50:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:41.659 12:50:22 -- json_config/json_config.sh@111 -- # app_pid[$app]=67571 00:05:41.659 12:50:22 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:41.659 12:50:22 -- json_config/json_config.sh@114 -- # waitforlisten 67571 /var/tmp/spdk_tgt.sock 00:05:41.659 12:50:22 -- common/autotest_common.sh@829 -- # '[' -z 67571 ']' 00:05:41.659 12:50:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.659 12:50:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.659 12:50:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.659 12:50:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.659 12:50:22 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:41.659 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.659 [2024-12-13 12:50:22.412235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:41.659 [2024-12-13 12:50:22.412333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67571 ] 00:05:42.226 [2024-12-13 12:50:22.835734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.226 [2024-12-13 12:50:22.880295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.226 [2024-12-13 12:50:22.880689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.793 12:50:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.793 12:50:23 -- common/autotest_common.sh@862 -- # return 0 00:05:42.793 12:50:23 -- json_config/json_config.sh@115 -- # echo '' 00:05:42.793 00:05:42.793 12:50:23 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:42.793 12:50:23 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:42.793 12:50:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.793 12:50:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.794 12:50:23 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:42.794 12:50:23 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:42.794 12:50:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.794 12:50:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.794 12:50:23 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:42.794 12:50:23 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:42.794 12:50:23 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:43.361 12:50:23 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:43.361 12:50:23 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:43.361 12:50:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.361 12:50:23 -- common/autotest_common.sh@10 -- # set +x 00:05:43.361 12:50:23 -- json_config/json_config.sh@48 -- # local ret=0 00:05:43.361 12:50:23 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:43.361 12:50:23 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:43.361 12:50:23 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:43.361 12:50:23 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:43.361 12:50:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:43.620 12:50:24 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:43.620 12:50:24 -- json_config/json_config.sh@51 -- # local get_types 00:05:43.620 12:50:24 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:43.620 12:50:24 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:43.620 12:50:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.620 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:05:43.620 12:50:24 -- json_config/json_config.sh@58 -- # return 0 00:05:43.620 12:50:24 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:43.620 12:50:24 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:43.620 12:50:24 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:43.620 12:50:24 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:43.620 12:50:24 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:43.620 12:50:24 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:43.620 12:50:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.620 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:05:43.620 12:50:24 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:43.620 12:50:24 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:43.620 12:50:24 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:43.620 12:50:24 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:43.620 12:50:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:43.878 MallocForNvmf0 00:05:43.878 12:50:24 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:43.878 12:50:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:44.136 MallocForNvmf1 00:05:44.136 12:50:24 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:44.136 12:50:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:44.395 [2024-12-13 12:50:24.971315] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
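Grouped for readability, the runtime configuration so far is driven entirely through scripts/rpc.py against the target socket; every command below appears in the trace above, only the $rpc shorthand is editorial:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512-byte blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024-byte blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0          # triggers the TCP Transport Init notice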
00:05:44.395 12:50:24 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:44.395 12:50:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:44.653 12:50:25 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:44.653 12:50:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:44.911 12:50:25 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:44.911 12:50:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:45.169 12:50:25 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:45.169 12:50:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:45.428 [2024-12-13 12:50:25.955897] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.428 12:50:25 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:45.428 12:50:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.428 12:50:25 -- common/autotest_common.sh@10 -- # set +x 00:05:45.428 12:50:26 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:45.428 12:50:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.428 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.428 12:50:26 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:45.428 12:50:26 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:45.428 12:50:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:45.687 MallocBdevForConfigChangeCheck 00:05:45.687 12:50:26 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:45.687 12:50:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.687 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 12:50:26 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:45.687 12:50:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.945 INFO: shutting down applications... 00:05:45.945 12:50:26 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
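The subsystem wiring and the change-detection marker follow the same pattern; the redirection of save_config into spdk_tgt_config.json is implied by the later comparison but not visible in the xtrace output:

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # marker bdev deleted later
    $rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json  # snapshot of the live config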
00:05:45.945 12:50:26 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:45.945 12:50:26 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:45.945 12:50:26 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:45.945 12:50:26 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:46.512 Calling clear_iscsi_subsystem 00:05:46.512 Calling clear_nvmf_subsystem 00:05:46.512 Calling clear_nbd_subsystem 00:05:46.512 Calling clear_ublk_subsystem 00:05:46.512 Calling clear_vhost_blk_subsystem 00:05:46.512 Calling clear_vhost_scsi_subsystem 00:05:46.512 Calling clear_scheduler_subsystem 00:05:46.512 Calling clear_bdev_subsystem 00:05:46.512 Calling clear_accel_subsystem 00:05:46.512 Calling clear_vmd_subsystem 00:05:46.512 Calling clear_sock_subsystem 00:05:46.512 Calling clear_iobuf_subsystem 00:05:46.512 12:50:27 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:46.512 12:50:27 -- json_config/json_config.sh@396 -- # count=100 00:05:46.512 12:50:27 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:46.512 12:50:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.512 12:50:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:46.512 12:50:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:46.771 12:50:27 -- json_config/json_config.sh@398 -- # break 00:05:46.771 12:50:27 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:46.771 12:50:27 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:46.771 12:50:27 -- json_config/json_config.sh@120 -- # local app=target 00:05:46.771 12:50:27 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:46.771 12:50:27 -- json_config/json_config.sh@124 -- # [[ -n 67571 ]] 00:05:46.771 12:50:27 -- json_config/json_config.sh@127 -- # kill -SIGINT 67571 00:05:46.771 12:50:27 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:46.771 12:50:27 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:46.771 12:50:27 -- json_config/json_config.sh@130 -- # kill -0 67571 00:05:46.771 12:50:27 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:47.391 12:50:27 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:47.391 12:50:27 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:47.391 12:50:27 -- json_config/json_config.sh@130 -- # kill -0 67571 00:05:47.391 12:50:27 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:47.391 12:50:27 -- json_config/json_config.sh@132 -- # break 00:05:47.391 12:50:27 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:47.391 SPDK target shutdown done 00:05:47.391 12:50:27 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:47.391 INFO: relaunching applications... 00:05:47.391 12:50:27 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
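The shutdown just traced is a fixed sequence: clear_config.py tears every subsystem down over RPC, config_filter.py -method check_empty asserts that nothing is left, and the target is stopped with SIGINT and polled for up to 15 seconds (30 iterations of 0.5 s, as in the loop above). Roughly, with $target_pid standing in for the app_pid bookkeeping (67571 in this run):

    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "$target_pid"                        # ask the reactor to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$target_pid" 2>/dev/null || break    # kill -0 only checks that the PID is still alive
        sleep 0.5
    done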
00:05:47.391 12:50:27 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:47.391 12:50:27 -- json_config/json_config.sh@98 -- # local app=target 00:05:47.391 12:50:27 -- json_config/json_config.sh@99 -- # shift 00:05:47.391 12:50:27 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:47.391 12:50:27 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:47.391 12:50:27 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:47.391 12:50:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.391 12:50:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.391 12:50:27 -- json_config/json_config.sh@111 -- # app_pid[$app]=67846 00:05:47.391 Waiting for target to run... 00:05:47.391 12:50:27 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:47.391 12:50:27 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:47.391 12:50:27 -- json_config/json_config.sh@114 -- # waitforlisten 67846 /var/tmp/spdk_tgt.sock 00:05:47.391 12:50:27 -- common/autotest_common.sh@829 -- # '[' -z 67846 ']' 00:05:47.391 12:50:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.391 12:50:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.391 12:50:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.391 12:50:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.391 12:50:27 -- common/autotest_common.sh@10 -- # set +x 00:05:47.391 [2024-12-13 12:50:28.006399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:47.391 [2024-12-13 12:50:28.006499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67846 ] 00:05:47.650 [2024-12-13 12:50:28.415613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.908 [2024-12-13 12:50:28.461497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.908 [2024-12-13 12:50:28.461654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.167 [2024-12-13 12:50:28.760790] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.167 [2024-12-13 12:50:28.792866] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:48.167 12:50:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.167 12:50:28 -- common/autotest_common.sh@862 -- # return 0 00:05:48.167 00:05:48.167 12:50:28 -- json_config/json_config.sh@115 -- # echo '' 00:05:48.167 12:50:28 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:48.167 INFO: Checking if target configuration is the same... 00:05:48.167 12:50:28 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
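On relaunch the saved JSON replaces the runtime RPC setup: the target is started with --json instead of --wait-for-rpc, so the TCP transport and the listener on 127.0.0.1:4420 come back (see the notices above) without any further rpc.py calls. The second invocation, as traced:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json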
00:05:48.167 12:50:28 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.167 12:50:28 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:48.167 12:50:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.167 + '[' 2 -ne 2 ']' 00:05:48.167 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:48.425 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:48.425 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:48.425 +++ basename /dev/fd/62 00:05:48.425 ++ mktemp /tmp/62.XXX 00:05:48.425 + tmp_file_1=/tmp/62.Jz9 00:05:48.425 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.425 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.425 + tmp_file_2=/tmp/spdk_tgt_config.json.9EI 00:05:48.425 + ret=0 00:05:48.425 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:48.684 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:48.684 + diff -u /tmp/62.Jz9 /tmp/spdk_tgt_config.json.9EI 00:05:48.684 INFO: JSON config files are the same 00:05:48.684 + echo 'INFO: JSON config files are the same' 00:05:48.684 + rm /tmp/62.Jz9 /tmp/spdk_tgt_config.json.9EI 00:05:48.684 + exit 0 00:05:48.684 12:50:29 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:48.684 INFO: changing configuration and checking if this can be detected... 00:05:48.684 12:50:29 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:48.684 12:50:29 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.684 12:50:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.943 12:50:29 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.943 12:50:29 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:48.943 12:50:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.943 + '[' 2 -ne 2 ']' 00:05:48.943 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:48.943 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:48.943 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:48.943 +++ basename /dev/fd/62 00:05:48.943 ++ mktemp /tmp/62.XXX 00:05:48.943 + tmp_file_1=/tmp/62.ZSC 00:05:48.943 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.943 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.943 + tmp_file_2=/tmp/spdk_tgt_config.json.MCn 00:05:48.943 + ret=0 00:05:48.943 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:49.510 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:49.510 + diff -u /tmp/62.ZSC /tmp/spdk_tgt_config.json.MCn 00:05:49.510 + ret=1 00:05:49.510 + echo '=== Start of file: /tmp/62.ZSC ===' 00:05:49.510 + cat /tmp/62.ZSC 00:05:49.510 + echo '=== End of file: /tmp/62.ZSC ===' 00:05:49.510 + echo '' 00:05:49.510 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MCn ===' 00:05:49.510 + cat /tmp/spdk_tgt_config.json.MCn 00:05:49.510 + echo '=== End of file: /tmp/spdk_tgt_config.json.MCn ===' 00:05:49.510 + echo '' 00:05:49.510 + rm /tmp/62.ZSC /tmp/spdk_tgt_config.json.MCn 00:05:49.510 + exit 1 00:05:49.510 INFO: configuration change detected. 00:05:49.510 12:50:30 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:49.510 12:50:30 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:49.510 12:50:30 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:49.510 12:50:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.510 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.510 12:50:30 -- json_config/json_config.sh@360 -- # local ret=0 00:05:49.510 12:50:30 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:49.510 12:50:30 -- json_config/json_config.sh@370 -- # [[ -n 67846 ]] 00:05:49.510 12:50:30 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:49.510 12:50:30 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:49.510 12:50:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.510 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.510 12:50:30 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:49.510 12:50:30 -- json_config/json_config.sh@246 -- # uname -s 00:05:49.510 12:50:30 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:49.510 12:50:30 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:49.510 12:50:30 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:49.510 12:50:30 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:49.510 12:50:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.510 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.510 12:50:30 -- json_config/json_config.sh@376 -- # killprocess 67846 00:05:49.510 12:50:30 -- common/autotest_common.sh@936 -- # '[' -z 67846 ']' 00:05:49.510 12:50:30 -- common/autotest_common.sh@940 -- # kill -0 67846 00:05:49.510 12:50:30 -- common/autotest_common.sh@941 -- # uname 00:05:49.510 12:50:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.510 12:50:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67846 00:05:49.510 12:50:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.510 12:50:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.510 killing process with pid 67846 00:05:49.510 12:50:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67846' 00:05:49.510 
12:50:30 -- common/autotest_common.sh@955 -- # kill 67846 00:05:49.510 12:50:30 -- common/autotest_common.sh@960 -- # wait 67846 00:05:49.769 12:50:30 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.769 12:50:30 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:49.769 12:50:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.769 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.769 12:50:30 -- json_config/json_config.sh@381 -- # return 0 00:05:49.769 INFO: Success 00:05:49.769 12:50:30 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:49.769 00:05:49.769 real 0m8.292s 00:05:49.769 user 0m11.802s 00:05:49.769 sys 0m1.811s 00:05:49.769 12:50:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.769 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.769 ************************************ 00:05:49.769 END TEST json_config 00:05:49.769 ************************************ 00:05:49.769 12:50:30 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.769 12:50:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.769 12:50:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.769 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.769 ************************************ 00:05:49.769 START TEST json_config_extra_key 00:05:49.769 ************************************ 00:05:49.769 12:50:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.769 12:50:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:49.769 12:50:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.029 12:50:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.029 12:50:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.029 12:50:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.029 12:50:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.029 12:50:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.029 12:50:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.029 12:50:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.029 12:50:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.029 12:50:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.029 12:50:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.029 12:50:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.029 12:50:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.029 12:50:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.029 12:50:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.029 12:50:30 -- scripts/common.sh@344 -- # : 1 00:05:50.029 12:50:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.029 12:50:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.029 12:50:30 -- scripts/common.sh@364 -- # decimal 1 00:05:50.029 12:50:30 -- scripts/common.sh@352 -- # local d=1 00:05:50.029 12:50:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.029 12:50:30 -- scripts/common.sh@354 -- # echo 1 00:05:50.029 12:50:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.029 12:50:30 -- scripts/common.sh@365 -- # decimal 2 00:05:50.029 12:50:30 -- scripts/common.sh@352 -- # local d=2 00:05:50.029 12:50:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.029 12:50:30 -- scripts/common.sh@354 -- # echo 2 00:05:50.029 12:50:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.029 12:50:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.029 12:50:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.029 12:50:30 -- scripts/common.sh@367 -- # return 0 00:05:50.029 12:50:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.029 12:50:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.029 --rc genhtml_branch_coverage=1 00:05:50.029 --rc genhtml_function_coverage=1 00:05:50.029 --rc genhtml_legend=1 00:05:50.029 --rc geninfo_all_blocks=1 00:05:50.029 --rc geninfo_unexecuted_blocks=1 00:05:50.029 00:05:50.029 ' 00:05:50.029 12:50:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.029 --rc genhtml_branch_coverage=1 00:05:50.029 --rc genhtml_function_coverage=1 00:05:50.029 --rc genhtml_legend=1 00:05:50.029 --rc geninfo_all_blocks=1 00:05:50.029 --rc geninfo_unexecuted_blocks=1 00:05:50.029 00:05:50.029 ' 00:05:50.029 12:50:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.029 --rc genhtml_branch_coverage=1 00:05:50.029 --rc genhtml_function_coverage=1 00:05:50.029 --rc genhtml_legend=1 00:05:50.029 --rc geninfo_all_blocks=1 00:05:50.029 --rc geninfo_unexecuted_blocks=1 00:05:50.029 00:05:50.029 ' 00:05:50.029 12:50:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.029 --rc genhtml_branch_coverage=1 00:05:50.029 --rc genhtml_function_coverage=1 00:05:50.029 --rc genhtml_legend=1 00:05:50.029 --rc geninfo_all_blocks=1 00:05:50.029 --rc geninfo_unexecuted_blocks=1 00:05:50.029 00:05:50.029 ' 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.029 12:50:30 -- nvmf/common.sh@7 -- # uname -s 00:05:50.029 12:50:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.029 12:50:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.029 12:50:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.029 12:50:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.029 12:50:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.029 12:50:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.029 12:50:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.029 12:50:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.029 12:50:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.029 12:50:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.029 12:50:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:05:50.029 12:50:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:05:50.029 12:50:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.029 12:50:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.029 12:50:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.029 12:50:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.029 12:50:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.029 12:50:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.029 12:50:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.029 12:50:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.029 12:50:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.029 12:50:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.029 12:50:30 -- paths/export.sh@5 -- # export PATH 00:05:50.029 12:50:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.029 12:50:30 -- nvmf/common.sh@46 -- # : 0 00:05:50.029 12:50:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:50.029 12:50:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:50.029 12:50:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:50.029 12:50:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.029 12:50:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.029 12:50:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:50.029 12:50:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:50.029 12:50:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.029 INFO: launching applications... 00:05:50.029 Waiting for target to run... 00:05:50.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68029 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68029 /var/tmp/spdk_tgt.sock 00:05:50.029 12:50:30 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:50.029 12:50:30 -- common/autotest_common.sh@829 -- # '[' -z 68029 ']' 00:05:50.029 12:50:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.029 12:50:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.029 12:50:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.029 12:50:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.029 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.029 [2024-12-13 12:50:30.713048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:50.029 [2024-12-13 12:50:30.713163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68029 ] 00:05:50.597 [2024-12-13 12:50:31.134198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.597 [2024-12-13 12:50:31.180640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.597 [2024-12-13 12:50:31.180815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.165 00:05:51.165 INFO: shutting down applications... 
00:05:51.165 12:50:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.165 12:50:31 -- common/autotest_common.sh@862 -- # return 0 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68029 ]] 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68029 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68029 00:05:51.165 12:50:31 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68029 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:51.423 SPDK target shutdown done 00:05:51.423 12:50:32 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:51.423 Success 00:05:51.423 00:05:51.423 real 0m1.658s 00:05:51.423 user 0m1.449s 00:05:51.423 sys 0m0.445s 00:05:51.423 12:50:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.423 12:50:32 -- common/autotest_common.sh@10 -- # set +x 00:05:51.423 ************************************ 00:05:51.423 END TEST json_config_extra_key 00:05:51.423 ************************************ 00:05:51.423 12:50:32 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.423 12:50:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.423 12:50:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.423 12:50:32 -- common/autotest_common.sh@10 -- # set +x 00:05:51.682 ************************************ 00:05:51.682 START TEST alias_rpc 00:05:51.682 ************************************ 00:05:51.682 12:50:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.682 * Looking for test storage... 
00:05:51.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:51.682 12:50:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.682 12:50:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.682 12:50:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.682 12:50:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.682 12:50:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.682 12:50:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.682 12:50:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.682 12:50:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.682 12:50:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.682 12:50:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.682 12:50:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.682 12:50:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.682 12:50:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.682 12:50:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.682 12:50:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.682 12:50:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.682 12:50:32 -- scripts/common.sh@344 -- # : 1 00:05:51.682 12:50:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.682 12:50:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.682 12:50:32 -- scripts/common.sh@364 -- # decimal 1 00:05:51.682 12:50:32 -- scripts/common.sh@352 -- # local d=1 00:05:51.682 12:50:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.682 12:50:32 -- scripts/common.sh@354 -- # echo 1 00:05:51.682 12:50:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.682 12:50:32 -- scripts/common.sh@365 -- # decimal 2 00:05:51.682 12:50:32 -- scripts/common.sh@352 -- # local d=2 00:05:51.682 12:50:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.682 12:50:32 -- scripts/common.sh@354 -- # echo 2 00:05:51.682 12:50:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.682 12:50:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.682 12:50:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.682 12:50:32 -- scripts/common.sh@367 -- # return 0 00:05:51.682 12:50:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.682 12:50:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.682 --rc genhtml_branch_coverage=1 00:05:51.682 --rc genhtml_function_coverage=1 00:05:51.682 --rc genhtml_legend=1 00:05:51.682 --rc geninfo_all_blocks=1 00:05:51.682 --rc geninfo_unexecuted_blocks=1 00:05:51.682 00:05:51.682 ' 00:05:51.682 12:50:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.682 --rc genhtml_branch_coverage=1 00:05:51.682 --rc genhtml_function_coverage=1 00:05:51.682 --rc genhtml_legend=1 00:05:51.682 --rc geninfo_all_blocks=1 00:05:51.682 --rc geninfo_unexecuted_blocks=1 00:05:51.682 00:05:51.682 ' 00:05:51.682 12:50:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.682 --rc genhtml_branch_coverage=1 00:05:51.682 --rc genhtml_function_coverage=1 00:05:51.682 --rc genhtml_legend=1 00:05:51.682 --rc geninfo_all_blocks=1 00:05:51.682 --rc geninfo_unexecuted_blocks=1 00:05:51.682 00:05:51.682 ' 
00:05:51.682 12:50:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.682 --rc genhtml_branch_coverage=1 00:05:51.682 --rc genhtml_function_coverage=1 00:05:51.682 --rc genhtml_legend=1 00:05:51.682 --rc geninfo_all_blocks=1 00:05:51.682 --rc geninfo_unexecuted_blocks=1 00:05:51.682 00:05:51.682 ' 00:05:51.682 12:50:32 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.683 12:50:32 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68107 00:05:51.683 12:50:32 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68107 00:05:51.683 12:50:32 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.683 12:50:32 -- common/autotest_common.sh@829 -- # '[' -z 68107 ']' 00:05:51.683 12:50:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.683 12:50:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.683 12:50:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.683 12:50:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.683 12:50:32 -- common/autotest_common.sh@10 -- # set +x 00:05:51.683 [2024-12-13 12:50:32.437766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:51.683 [2024-12-13 12:50:32.437864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68107 ] 00:05:51.941 [2024-12-13 12:50:32.572936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.941 [2024-12-13 12:50:32.634657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.941 [2024-12-13 12:50:32.634847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.877 12:50:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.877 12:50:33 -- common/autotest_common.sh@862 -- # return 0 00:05:52.877 12:50:33 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:53.136 12:50:33 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68107 00:05:53.136 12:50:33 -- common/autotest_common.sh@936 -- # '[' -z 68107 ']' 00:05:53.136 12:50:33 -- common/autotest_common.sh@940 -- # kill -0 68107 00:05:53.136 12:50:33 -- common/autotest_common.sh@941 -- # uname 00:05:53.136 12:50:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.136 12:50:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68107 00:05:53.136 12:50:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.136 12:50:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.136 12:50:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68107' 00:05:53.136 killing process with pid 68107 00:05:53.136 12:50:33 -- common/autotest_common.sh@955 -- # kill 68107 00:05:53.136 12:50:33 -- common/autotest_common.sh@960 -- # wait 68107 00:05:53.395 00:05:53.395 real 0m1.899s 00:05:53.395 user 0m2.168s 00:05:53.395 sys 0m0.481s 00:05:53.395 12:50:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.395 12:50:34 -- common/autotest_common.sh@10 -- # set +x 
00:05:53.395 ************************************ 00:05:53.395 END TEST alias_rpc 00:05:53.395 ************************************ 00:05:53.395 12:50:34 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:53.395 12:50:34 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.395 12:50:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.395 12:50:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.395 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:05:53.395 ************************************ 00:05:53.395 START TEST dpdk_mem_utility 00:05:53.395 ************************************ 00:05:53.395 12:50:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.653 * Looking for test storage... 00:05:53.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:53.653 12:50:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.653 12:50:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.653 12:50:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.653 12:50:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.654 12:50:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.654 12:50:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.654 12:50:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.654 12:50:34 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.654 12:50:34 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.654 12:50:34 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.654 12:50:34 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.654 12:50:34 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.654 12:50:34 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.654 12:50:34 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.654 12:50:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.654 12:50:34 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.654 12:50:34 -- scripts/common.sh@344 -- # : 1 00:05:53.654 12:50:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.654 12:50:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.654 12:50:34 -- scripts/common.sh@364 -- # decimal 1 00:05:53.654 12:50:34 -- scripts/common.sh@352 -- # local d=1 00:05:53.654 12:50:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.654 12:50:34 -- scripts/common.sh@354 -- # echo 1 00:05:53.654 12:50:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.654 12:50:34 -- scripts/common.sh@365 -- # decimal 2 00:05:53.654 12:50:34 -- scripts/common.sh@352 -- # local d=2 00:05:53.654 12:50:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.654 12:50:34 -- scripts/common.sh@354 -- # echo 2 00:05:53.654 12:50:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.654 12:50:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.654 12:50:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.654 12:50:34 -- scripts/common.sh@367 -- # return 0 00:05:53.654 12:50:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.654 12:50:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 12:50:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 12:50:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 12:50:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.654 --rc genhtml_branch_coverage=1 00:05:53.654 --rc genhtml_function_coverage=1 00:05:53.654 --rc genhtml_legend=1 00:05:53.654 --rc geninfo_all_blocks=1 00:05:53.654 --rc geninfo_unexecuted_blocks=1 00:05:53.654 00:05:53.654 ' 00:05:53.654 12:50:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:53.654 12:50:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68206 00:05:53.654 12:50:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68206 00:05:53.654 12:50:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.654 12:50:34 -- common/autotest_common.sh@829 -- # '[' -z 68206 ']' 00:05:53.654 12:50:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.654 12:50:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.654 12:50:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:53.654 12:50:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.654 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:05:53.654 [2024-12-13 12:50:34.388113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:53.654 [2024-12-13 12:50:34.388209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68206 ] 00:05:53.913 [2024-12-13 12:50:34.524475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.913 [2024-12-13 12:50:34.576531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.913 [2024-12-13 12:50:34.576709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.848 12:50:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.848 12:50:35 -- common/autotest_common.sh@862 -- # return 0 00:05:54.848 12:50:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.848 12:50:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.848 12:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.848 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:54.848 { 00:05:54.848 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.848 } 00:05:54.848 12:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.848 12:50:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.848 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:54.848 1 heaps totaling size 814.000000 MiB 00:05:54.848 size: 814.000000 MiB heap id: 0 00:05:54.848 end heaps---------- 00:05:54.848 8 mempools totaling size 598.116089 MiB 00:05:54.848 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.848 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.848 size: 84.521057 MiB name: bdev_io_68206 00:05:54.848 size: 51.011292 MiB name: evtpool_68206 00:05:54.848 size: 50.003479 MiB name: msgpool_68206 00:05:54.848 size: 21.763794 MiB name: PDU_Pool 00:05:54.848 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.848 size: 0.026123 MiB name: Session_Pool 00:05:54.848 end mempools------- 00:05:54.848 6 memzones totaling size 4.142822 MiB 00:05:54.848 size: 1.000366 MiB name: RG_ring_0_68206 00:05:54.848 size: 1.000366 MiB name: RG_ring_1_68206 00:05:54.848 size: 1.000366 MiB name: RG_ring_4_68206 00:05:54.848 size: 1.000366 MiB name: RG_ring_5_68206 00:05:54.848 size: 0.125366 MiB name: RG_ring_2_68206 00:05:54.848 size: 0.015991 MiB name: RG_ring_3_68206 00:05:54.848 end memzones------- 00:05:54.848 12:50:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.848 heap id: 0 total size: 814.000000 MiB number of busy elements: 224 number of free elements: 15 00:05:54.848 list of free elements. 
size: 12.485840 MiB 00:05:54.848 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:54.848 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:54.848 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:54.848 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:54.848 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:54.848 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:54.848 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:54.848 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:54.848 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:54.848 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:05:54.848 element at address: 0x20000b200000 with size: 0.489441 MiB 00:05:54.848 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:54.848 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:54.848 element at address: 0x200027e00000 with size: 0.397766 MiB 00:05:54.848 element at address: 0x200003a00000 with size: 0.351501 MiB 00:05:54.848 list of standard malloc elements. size: 199.251587 MiB 00:05:54.848 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:54.848 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:54.848 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.848 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:54.848 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:54.848 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.848 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:54.848 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.848 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:54.848 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:05:54.848 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:54.848 element at 
address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:54.848 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94540 
with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:54.849 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e65d40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ca00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6dec0 with size: 0.000183 MiB 
00:05:54.849 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:54.849 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:54.849 list of memzone associated elements. 
size: 602.262573 MiB 00:05:54.849 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:54.849 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.849 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:54.849 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.849 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:54.849 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68206_0 00:05:54.849 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:54.849 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68206_0 00:05:54.849 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:54.849 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68206_0 00:05:54.849 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:54.849 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.849 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:54.849 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.849 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:54.849 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68206 00:05:54.849 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:54.849 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68206 00:05:54.849 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.849 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68206 00:05:54.850 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:54.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.850 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:54.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.850 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:54.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.850 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:54.850 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.850 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:54.850 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68206 00:05:54.850 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:54.850 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68206 00:05:54.850 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:54.850 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68206 00:05:54.850 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:54.850 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68206 00:05:54.850 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:54.850 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68206 00:05:54.850 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:54.850 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.850 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:54.850 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.850 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:54.850 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.850 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:54.850 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68206 00:05:54.850 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:54.850 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.850 element at address: 0x200027e65ec0 with size: 0.023743 MiB 00:05:54.850 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.850 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:54.850 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68206 00:05:54.850 element at address: 0x200027e6c000 with size: 0.002441 MiB 00:05:54.850 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.850 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:54.850 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68206 00:05:54.850 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:54.850 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68206 00:05:54.850 element at address: 0x200027e6cac0 with size: 0.000305 MiB 00:05:54.850 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.850 12:50:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.850 12:50:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68206 00:05:54.850 12:50:35 -- common/autotest_common.sh@936 -- # '[' -z 68206 ']' 00:05:54.850 12:50:35 -- common/autotest_common.sh@940 -- # kill -0 68206 00:05:54.850 12:50:35 -- common/autotest_common.sh@941 -- # uname 00:05:54.850 12:50:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.850 12:50:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68206 00:05:54.850 12:50:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.850 12:50:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.850 killing process with pid 68206 00:05:54.850 12:50:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68206' 00:05:54.850 12:50:35 -- common/autotest_common.sh@955 -- # kill 68206 00:05:54.850 12:50:35 -- common/autotest_common.sh@960 -- # wait 68206 00:05:55.417 ************************************ 00:05:55.417 END TEST dpdk_mem_utility 00:05:55.417 00:05:55.417 real 0m1.739s 00:05:55.417 user 0m1.878s 00:05:55.417 sys 0m0.463s 00:05:55.417 12:50:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.417 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.417 ************************************ 00:05:55.417 12:50:35 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:55.417 12:50:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.417 12:50:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.417 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.417 ************************************ 00:05:55.417 START TEST event 00:05:55.417 ************************************ 00:05:55.417 12:50:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:55.417 * Looking for test storage... 
00:05:55.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:55.417 12:50:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.417 12:50:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.417 12:50:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.417 12:50:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.417 12:50:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.417 12:50:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.417 12:50:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.417 12:50:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.417 12:50:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.417 12:50:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.417 12:50:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.417 12:50:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.417 12:50:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.417 12:50:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.417 12:50:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.417 12:50:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.417 12:50:36 -- scripts/common.sh@344 -- # : 1 00:05:55.418 12:50:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.418 12:50:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.418 12:50:36 -- scripts/common.sh@364 -- # decimal 1 00:05:55.418 12:50:36 -- scripts/common.sh@352 -- # local d=1 00:05:55.418 12:50:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.418 12:50:36 -- scripts/common.sh@354 -- # echo 1 00:05:55.418 12:50:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.418 12:50:36 -- scripts/common.sh@365 -- # decimal 2 00:05:55.418 12:50:36 -- scripts/common.sh@352 -- # local d=2 00:05:55.418 12:50:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.418 12:50:36 -- scripts/common.sh@354 -- # echo 2 00:05:55.418 12:50:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.418 12:50:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.418 12:50:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.418 12:50:36 -- scripts/common.sh@367 -- # return 0 00:05:55.418 12:50:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.418 12:50:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.418 --rc genhtml_branch_coverage=1 00:05:55.418 --rc genhtml_function_coverage=1 00:05:55.418 --rc genhtml_legend=1 00:05:55.418 --rc geninfo_all_blocks=1 00:05:55.418 --rc geninfo_unexecuted_blocks=1 00:05:55.418 00:05:55.418 ' 00:05:55.418 12:50:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.418 --rc genhtml_branch_coverage=1 00:05:55.418 --rc genhtml_function_coverage=1 00:05:55.418 --rc genhtml_legend=1 00:05:55.418 --rc geninfo_all_blocks=1 00:05:55.418 --rc geninfo_unexecuted_blocks=1 00:05:55.418 00:05:55.418 ' 00:05:55.418 12:50:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.418 --rc genhtml_branch_coverage=1 00:05:55.418 --rc genhtml_function_coverage=1 00:05:55.418 --rc genhtml_legend=1 00:05:55.418 --rc geninfo_all_blocks=1 00:05:55.418 --rc geninfo_unexecuted_blocks=1 00:05:55.418 00:05:55.418 ' 00:05:55.418 12:50:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.418 --rc genhtml_branch_coverage=1 00:05:55.418 --rc genhtml_function_coverage=1 00:05:55.418 --rc genhtml_legend=1 00:05:55.418 --rc geninfo_all_blocks=1 00:05:55.418 --rc geninfo_unexecuted_blocks=1 00:05:55.418 00:05:55.418 ' 00:05:55.418 12:50:36 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:55.418 12:50:36 -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.418 12:50:36 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.418 12:50:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:55.418 12:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.418 12:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:55.418 ************************************ 00:05:55.418 START TEST event_perf 00:05:55.418 ************************************ 00:05:55.418 12:50:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.418 Running I/O for 1 seconds...[2024-12-13 12:50:36.142468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:55.418 [2024-12-13 12:50:36.142570] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68308 ] 00:05:55.676 [2024-12-13 12:50:36.273619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.676 [2024-12-13 12:50:36.326056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.676 [2024-12-13 12:50:36.326197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.676 [2024-12-13 12:50:36.326343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.676 [2024-12-13 12:50:36.326344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.611 Running I/O for 1 seconds... 00:05:56.611 lcore 0: 215934 00:05:56.611 lcore 1: 215934 00:05:56.611 lcore 2: 215933 00:05:56.611 lcore 3: 215934 00:05:56.611 done. 00:05:56.611 00:05:56.611 real 0m1.255s 00:05:56.611 user 0m4.085s 00:05:56.611 sys 0m0.053s 00:05:56.611 12:50:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.611 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:05:56.611 ************************************ 00:05:56.611 END TEST event_perf 00:05:56.611 ************************************ 00:05:56.913 12:50:37 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:56.913 12:50:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:56.913 12:50:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.913 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:05:56.913 ************************************ 00:05:56.913 START TEST event_reactor 00:05:56.913 ************************************ 00:05:56.913 12:50:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:56.913 [2024-12-13 12:50:37.445341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
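The event_perf block above follows the suite's banner-and-timing pattern: a START TEST banner, the timed test binary (here event_perf -m 0xF -t 1, producing the per-lcore counts and the 0m1.255s real time), then an END TEST banner. A rough sketch of such a wrapper, assuming a much-simplified run_test; the real one in autotest_common.sh also handles xtrace control and argument checks.

    # Simplified stand-in for run_test; banners mimic the ones in the log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1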
00:05:56.913 [2024-12-13 12:50:37.445427] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68341 ] 00:05:56.913 [2024-12-13 12:50:37.575846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.913 [2024-12-13 12:50:37.646286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.312 test_start 00:05:58.312 oneshot 00:05:58.312 tick 100 00:05:58.312 tick 100 00:05:58.312 tick 250 00:05:58.312 tick 100 00:05:58.312 tick 100 00:05:58.312 tick 100 00:05:58.312 tick 250 00:05:58.312 tick 500 00:05:58.312 tick 100 00:05:58.312 tick 100 00:05:58.312 tick 250 00:05:58.312 tick 100 00:05:58.312 tick 100 00:05:58.312 test_end 00:05:58.312 00:05:58.312 real 0m1.264s 00:05:58.312 user 0m1.102s 00:05:58.312 sys 0m0.057s 00:05:58.312 12:50:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.312 ************************************ 00:05:58.312 END TEST event_reactor 00:05:58.312 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:58.312 ************************************ 00:05:58.312 12:50:38 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.312 12:50:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:58.312 12:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.312 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:58.312 ************************************ 00:05:58.312 START TEST event_reactor_perf 00:05:58.312 ************************************ 00:05:58.312 12:50:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.312 [2024-12-13 12:50:38.767595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:58.312 [2024-12-13 12:50:38.767690] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68371 ] 00:05:58.312 [2024-12-13 12:50:38.903092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.312 [2024-12-13 12:50:38.951933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.247 test_start 00:05:59.247 test_end 00:05:59.247 Performance: 465203 events per second 00:05:59.247 00:05:59.247 real 0m1.252s 00:05:59.247 user 0m1.090s 00:05:59.247 sys 0m0.057s 00:05:59.247 12:50:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.247 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.247 ************************************ 00:05:59.247 END TEST event_reactor_perf 00:05:59.247 ************************************ 00:05:59.505 12:50:40 -- event/event.sh@49 -- # uname -s 00:05:59.506 12:50:40 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:59.506 12:50:40 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.506 12:50:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.506 12:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.506 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.506 ************************************ 00:05:59.506 START TEST event_scheduler 00:05:59.506 ************************************ 00:05:59.506 12:50:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.506 * Looking for test storage... 00:05:59.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:59.506 12:50:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.506 12:50:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.506 12:50:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.506 12:50:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.506 12:50:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.506 12:50:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.506 12:50:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.506 12:50:40 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.506 12:50:40 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.506 12:50:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.506 12:50:40 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.506 12:50:40 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.506 12:50:40 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.506 12:50:40 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.506 12:50:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.506 12:50:40 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.506 12:50:40 -- scripts/common.sh@344 -- # : 1 00:05:59.506 12:50:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.506 12:50:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.506 12:50:40 -- scripts/common.sh@364 -- # decimal 1 00:05:59.506 12:50:40 -- scripts/common.sh@352 -- # local d=1 00:05:59.506 12:50:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.506 12:50:40 -- scripts/common.sh@354 -- # echo 1 00:05:59.506 12:50:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.506 12:50:40 -- scripts/common.sh@365 -- # decimal 2 00:05:59.506 12:50:40 -- scripts/common.sh@352 -- # local d=2 00:05:59.506 12:50:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.506 12:50:40 -- scripts/common.sh@354 -- # echo 2 00:05:59.506 12:50:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.506 12:50:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.506 12:50:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.506 12:50:40 -- scripts/common.sh@367 -- # return 0 00:05:59.506 12:50:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.506 12:50:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.506 --rc genhtml_branch_coverage=1 00:05:59.506 --rc genhtml_function_coverage=1 00:05:59.506 --rc genhtml_legend=1 00:05:59.506 --rc geninfo_all_blocks=1 00:05:59.506 --rc geninfo_unexecuted_blocks=1 00:05:59.506 00:05:59.506 ' 00:05:59.506 12:50:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.506 --rc genhtml_branch_coverage=1 00:05:59.506 --rc genhtml_function_coverage=1 00:05:59.506 --rc genhtml_legend=1 00:05:59.506 --rc geninfo_all_blocks=1 00:05:59.506 --rc geninfo_unexecuted_blocks=1 00:05:59.506 00:05:59.506 ' 00:05:59.506 12:50:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.506 --rc genhtml_branch_coverage=1 00:05:59.506 --rc genhtml_function_coverage=1 00:05:59.506 --rc genhtml_legend=1 00:05:59.506 --rc geninfo_all_blocks=1 00:05:59.506 --rc geninfo_unexecuted_blocks=1 00:05:59.506 00:05:59.506 ' 00:05:59.506 12:50:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.506 --rc genhtml_branch_coverage=1 00:05:59.506 --rc genhtml_function_coverage=1 00:05:59.506 --rc genhtml_legend=1 00:05:59.506 --rc geninfo_all_blocks=1 00:05:59.506 --rc geninfo_unexecuted_blocks=1 00:05:59.506 00:05:59.506 ' 00:05:59.506 12:50:40 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.506 12:50:40 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68445 00:05:59.506 12:50:40 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.506 12:50:40 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.506 12:50:40 -- scheduler/scheduler.sh@37 -- # waitforlisten 68445 00:05:59.506 12:50:40 -- common/autotest_common.sh@829 -- # '[' -z 68445 ']' 00:05:59.506 12:50:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.506 12:50:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.506 12:50:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
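waitforlisten (traced above with scheduler pid 68445, the default socket /var/tmp/spdk.sock and max_retries=100) blocks until the freshly started app answers on its RPC socket. A hedged sketch of that wait loop, using rpc_get_methods as the liveness probe - an assumption; the real autotest_common.sh helper structures its retries differently.

    # Approximation of waitforlisten; not the exact autotest_common.sh code.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # app died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &>/dev/null; then
                return 0                                # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                        # timed out
    }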
00:05:59.506 12:50:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.506 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.764 [2024-12-13 12:50:40.289047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:59.764 [2024-12-13 12:50:40.289142] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68445 ] 00:05:59.764 [2024-12-13 12:50:40.427675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.764 [2024-12-13 12:50:40.493785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.764 [2024-12-13 12:50:40.493924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.764 [2024-12-13 12:50:40.494051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.764 [2024-12-13 12:50:40.494053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.764 12:50:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.764 12:50:40 -- common/autotest_common.sh@862 -- # return 0 00:05:59.764 12:50:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.764 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.764 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.022 POWER: Env isn't set yet! 00:06:00.022 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:00.022 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.022 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.022 POWER: Attempting to initialise PSTAT power management... 00:06:00.022 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.022 POWER: Cannot set governor of lcore 0 to performance 00:06:00.022 POWER: Attempting to initialise CPPC power management... 00:06:00.022 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.023 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.023 POWER: Attempting to initialise VM power management... 
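The POWER lines above are DPDK probing /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor inside the VM; since those files are missing, every governor (ACPI cpufreq, PSTAT, CPPC) fails and, as the next lines show, scheduler_dynamic falls back to running without the dpdk governor. A quick sketch for checking what a host actually exposes:

    # Sketch: list the cpufreq governor interface the DPDK power library expects.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        gov="$cpu/cpufreq/scaling_governor"
        if [ -r "$gov" ]; then
            echo "$(basename "$cpu"): $(cat "$gov")"    # e.g. performance or userspace
        else
            echo "$(basename "$cpu"): no cpufreq support (as on this VM)"
        fi
    done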
00:06:00.023 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:00.023 POWER: Unable to set Power Management Environment for lcore 0 00:06:00.023 [2024-12-13 12:50:40.545664] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:00.023 [2024-12-13 12:50:40.545831] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:00.023 [2024-12-13 12:50:40.545945] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:00.023 [2024-12-13 12:50:40.546094] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:00.023 [2024-12-13 12:50:40.546116] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:00.023 [2024-12-13 12:50:40.546126] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 [2024-12-13 12:50:40.637224] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:00.023 12:50:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.023 12:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 ************************************ 00:06:00.023 START TEST scheduler_create_thread 00:06:00.023 ************************************ 00:06:00.023 12:50:40 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 2 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 3 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 4 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 5 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 6 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 7 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 8 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 9 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 10 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 12:50:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.023 12:50:40 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:00.023 12:50:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.023 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.926 12:50:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.926 12:50:42 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.926 12:50:42 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.926 12:50:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.926 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:06:02.494 ************************************ 00:06:02.494 END TEST scheduler_create_thread 00:06:02.494 ************************************ 00:06:02.494 12:50:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.494 00:06:02.494 real 0m2.615s 00:06:02.494 user 0m0.017s 00:06:02.494 sys 0m0.007s 00:06:02.494 12:50:43 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.494 12:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:02.753 12:50:43 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.753 12:50:43 -- scheduler/scheduler.sh@46 -- # killprocess 68445 00:06:02.753 12:50:43 -- common/autotest_common.sh@936 -- # '[' -z 68445 ']' 00:06:02.753 12:50:43 -- common/autotest_common.sh@940 -- # kill -0 68445 00:06:02.753 12:50:43 -- common/autotest_common.sh@941 -- # uname 00:06:02.753 12:50:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.753 12:50:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68445 00:06:02.753 killing process with pid 68445 00:06:02.753 12:50:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:02.753 12:50:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:02.753 12:50:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68445' 00:06:02.753 12:50:43 -- common/autotest_common.sh@955 -- # kill 68445 00:06:02.753 12:50:43 -- common/autotest_common.sh@960 -- # wait 68445 00:06:03.012 [2024-12-13 12:50:43.744671] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:03.271 00:06:03.271 real 0m3.888s 00:06:03.271 user 0m5.692s 00:06:03.271 sys 0m0.348s 00:06:03.271 12:50:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.271 12:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:03.271 ************************************ 00:06:03.271 END TEST event_scheduler 00:06:03.271 ************************************ 00:06:03.271 12:50:43 -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.271 12:50:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.271 12:50:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.271 12:50:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.271 12:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:03.271 ************************************ 00:06:03.271 START TEST app_repeat 00:06:03.271 ************************************ 00:06:03.271 12:50:44 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:03.271 12:50:44 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.271 12:50:44 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.271 12:50:44 -- event/event.sh@13 -- # local nbd_list 00:06:03.271 12:50:44 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.271 12:50:44 -- event/event.sh@14 -- # local bdev_list 00:06:03.271 12:50:44 -- event/event.sh@15 -- # local repeat_times=4 00:06:03.271 12:50:44 -- event/event.sh@17 -- # modprobe nbd 00:06:03.271 12:50:44 -- event/event.sh@19 -- # repeat_pid=68544 00:06:03.271 12:50:44 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.271 12:50:44 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.271 Process app_repeat pid: 68544 00:06:03.271 12:50:44 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68544' 00:06:03.271 spdk_app_start Round 0 00:06:03.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
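app_repeat runs three rounds (the 'for i in {0..2}' and 'spdk_app_start Round N' lines below and again at 12:50:50 and 12:50:56); each round builds two malloc bdevs, verifies them over NBD, then kills the instance and pauses before the next round. An outline of that outer loop, with the per-round body abbreviated to comments:

    # Outline only; the full per-round body is what the trace below shows.
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # round body: 2x bdev_malloc_create, nbd_rpc_data_verify over
        # /dev/nbd0 and /dev/nbd1, then spdk_kill_instance SIGTERM
        sleep 3                                         # pause between rounds
    done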
00:06:03.271 12:50:44 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.271 12:50:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.271 12:50:44 -- event/event.sh@25 -- # waitforlisten 68544 /var/tmp/spdk-nbd.sock 00:06:03.271 12:50:44 -- common/autotest_common.sh@829 -- # '[' -z 68544 ']' 00:06:03.271 12:50:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.271 12:50:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.271 12:50:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.271 12:50:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.271 12:50:44 -- common/autotest_common.sh@10 -- # set +x 00:06:03.271 [2024-12-13 12:50:44.028445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:03.271 [2024-12-13 12:50:44.028546] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68544 ] 00:06:03.530 [2024-12-13 12:50:44.160400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.530 [2024-12-13 12:50:44.215010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.530 [2024-12-13 12:50:44.215018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.469 12:50:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.469 12:50:45 -- common/autotest_common.sh@862 -- # return 0 00:06:04.469 12:50:45 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.469 Malloc0 00:06:04.729 12:50:45 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.987 Malloc1 00:06:04.987 12:50:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.987 12:50:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.987 12:50:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.987 12:50:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.987 12:50:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.987 12:50:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@12 -- # local i 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.988 12:50:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.246 /dev/nbd0 00:06:05.246 12:50:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.246 12:50:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
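Round 0 above attaches two 64 MiB, 4096-byte-block malloc bdevs and exports them as NBD devices through the app's /var/tmp/spdk-nbd.sock RPC socket. The same calls, collected into a stand-alone sketch; command names and arguments are exactly the ones visible in the trace.

    # Sketch: the RPC sequence round 0 issues, collected in one place.
    RPC_PY="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC_PY bdev_malloc_create 64 4096          # -> Malloc0 (64 MiB, 4096-byte blocks)
    $RPC_PY bdev_malloc_create 64 4096          # -> Malloc1
    $RPC_PY nbd_start_disk Malloc0 /dev/nbd0    # export each bdev as a kernel NBD device
    $RPC_PY nbd_start_disk Malloc1 /dev/nbd1
    $RPC_PY nbd_get_disks                       # JSON list of bdev_name/nbd_device pairs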
00:06:05.246 12:50:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:05.246 12:50:45 -- common/autotest_common.sh@867 -- # local i 00:06:05.246 12:50:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.246 12:50:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.246 12:50:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:05.246 12:50:45 -- common/autotest_common.sh@871 -- # break 00:06:05.246 12:50:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.246 12:50:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.247 12:50:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.247 1+0 records in 00:06:05.247 1+0 records out 00:06:05.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324965 s, 12.6 MB/s 00:06:05.247 12:50:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.247 12:50:45 -- common/autotest_common.sh@884 -- # size=4096 00:06:05.247 12:50:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.247 12:50:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.247 12:50:45 -- common/autotest_common.sh@887 -- # return 0 00:06:05.247 12:50:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.247 12:50:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.247 12:50:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.506 /dev/nbd1 00:06:05.506 12:50:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.506 12:50:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.506 12:50:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.506 12:50:46 -- common/autotest_common.sh@867 -- # local i 00:06:05.506 12:50:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.506 12:50:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.506 12:50:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.506 12:50:46 -- common/autotest_common.sh@871 -- # break 00:06:05.506 12:50:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.506 12:50:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.506 12:50:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.506 1+0 records in 00:06:05.506 1+0 records out 00:06:05.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283989 s, 14.4 MB/s 00:06:05.506 12:50:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.506 12:50:46 -- common/autotest_common.sh@884 -- # size=4096 00:06:05.506 12:50:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.506 12:50:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.506 12:50:46 -- common/autotest_common.sh@887 -- # return 0 00:06:05.506 12:50:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.506 12:50:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.506 12:50:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.506 12:50:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.506 12:50:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.765 { 00:06:05.765 "bdev_name": "Malloc0", 00:06:05.765 "nbd_device": "/dev/nbd0" 00:06:05.765 }, 00:06:05.765 { 00:06:05.765 "bdev_name": "Malloc1", 00:06:05.765 "nbd_device": "/dev/nbd1" 00:06:05.765 } 00:06:05.765 ]' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.765 { 00:06:05.765 "bdev_name": "Malloc0", 00:06:05.765 "nbd_device": "/dev/nbd0" 00:06:05.765 }, 00:06:05.765 { 00:06:05.765 "bdev_name": "Malloc1", 00:06:05.765 "nbd_device": "/dev/nbd1" 00:06:05.765 } 00:06:05.765 ]' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.765 /dev/nbd1' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.765 /dev/nbd1' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.765 256+0 records in 00:06:05.765 256+0 records out 00:06:05.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058929 s, 178 MB/s 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.765 256+0 records in 00:06:05.765 256+0 records out 00:06:05.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230346 s, 45.5 MB/s 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.765 256+0 records in 00:06:05.765 256+0 records out 00:06:05.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261307 s, 40.1 MB/s 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.765 
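The verification above (and its /dev/nbd1 counterpart just below) is a plain write-then-compare: fill a scratch file from /dev/urandom, dd it onto each NBD device with O_DIRECT, then cmp the first 1 MiB back from the device. The same steps as a stand-alone sketch, with the file and device names taken from the trace:

    # Sketch of the nbd_dd_data_verify write/verify steps seen in the trace.
    TEST_FILE=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$TEST_FILE" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$TEST_FILE" of="$nbd" bs=4096 count=256 oflag=direct   # write to the device
        cmp -b -n 1M "$TEST_FILE" "$nbd"                              # read back and compare
    done
    rm "$TEST_FILE"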
12:50:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@51 -- # local i 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.765 12:50:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.023 12:50:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.023 12:50:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.023 12:50:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.023 12:50:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.023 12:50:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.023 12:50:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.023 12:50:46 -- bdev/nbd_common.sh@41 -- # break 00:06:06.024 12:50:46 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.024 12:50:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.024 12:50:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@41 -- # break 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.282 12:50:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@65 -- # true 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.541 12:50:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.541 12:50:47 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.109 12:50:47 -- event/event.sh@35 -- # 
sleep 3 00:06:07.109 [2024-12-13 12:50:47.744426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.109 [2024-12-13 12:50:47.786299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.109 [2024-12-13 12:50:47.786309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.109 [2024-12-13 12:50:47.837745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.109 [2024-12-13 12:50:47.837841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.395 spdk_app_start Round 1 00:06:10.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.396 12:50:50 -- event/event.sh@23 -- # for i in {0..2} 00:06:10.396 12:50:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.396 12:50:50 -- event/event.sh@25 -- # waitforlisten 68544 /var/tmp/spdk-nbd.sock 00:06:10.396 12:50:50 -- common/autotest_common.sh@829 -- # '[' -z 68544 ']' 00:06:10.396 12:50:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.396 12:50:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.396 12:50:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.396 12:50:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.396 12:50:50 -- common/autotest_common.sh@10 -- # set +x 00:06:10.396 12:50:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.396 12:50:50 -- common/autotest_common.sh@862 -- # return 0 00:06:10.396 12:50:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.396 Malloc0 00:06:10.396 12:50:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.654 Malloc1 00:06:10.654 12:50:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.654 12:50:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.654 12:50:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.654 12:50:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.654 12:50:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.655 12:50:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.913 /dev/nbd0 00:06:10.913 12:50:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.913 12:50:51 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.913 12:50:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.913 12:50:51 -- common/autotest_common.sh@867 -- # local i 00:06:10.913 12:50:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.913 12:50:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.913 12:50:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.913 12:50:51 -- common/autotest_common.sh@871 -- # break 00:06:10.913 12:50:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.913 12:50:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.913 12:50:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.913 1+0 records in 00:06:10.913 1+0 records out 00:06:10.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288641 s, 14.2 MB/s 00:06:10.913 12:50:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.913 12:50:51 -- common/autotest_common.sh@884 -- # size=4096 00:06:10.913 12:50:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.913 12:50:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.913 12:50:51 -- common/autotest_common.sh@887 -- # return 0 00:06:10.913 12:50:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.913 12:50:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.913 12:50:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.173 /dev/nbd1 00:06:11.173 12:50:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.173 12:50:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.173 12:50:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.173 12:50:51 -- common/autotest_common.sh@867 -- # local i 00:06:11.173 12:50:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.173 12:50:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.173 12:50:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.173 12:50:51 -- common/autotest_common.sh@871 -- # break 00:06:11.173 12:50:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.173 12:50:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.173 12:50:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.173 1+0 records in 00:06:11.173 1+0 records out 00:06:11.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304718 s, 13.4 MB/s 00:06:11.173 12:50:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.173 12:50:51 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.173 12:50:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.173 12:50:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.173 12:50:51 -- common/autotest_common.sh@887 -- # return 0 00:06:11.173 12:50:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.173 12:50:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.173 12:50:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.173 12:50:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.173 12:50:51 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.432 12:50:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.432 { 00:06:11.432 "bdev_name": "Malloc0", 00:06:11.432 "nbd_device": "/dev/nbd0" 00:06:11.432 }, 00:06:11.432 { 00:06:11.432 "bdev_name": "Malloc1", 00:06:11.432 "nbd_device": "/dev/nbd1" 00:06:11.432 } 00:06:11.432 ]' 00:06:11.432 12:50:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.432 { 00:06:11.432 "bdev_name": "Malloc0", 00:06:11.432 "nbd_device": "/dev/nbd0" 00:06:11.432 }, 00:06:11.432 { 00:06:11.432 "bdev_name": "Malloc1", 00:06:11.432 "nbd_device": "/dev/nbd1" 00:06:11.432 } 00:06:11.432 ]' 00:06:11.432 12:50:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.691 /dev/nbd1' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.691 /dev/nbd1' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.691 256+0 records in 00:06:11.691 256+0 records out 00:06:11.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718319 s, 146 MB/s 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.691 256+0 records in 00:06:11.691 256+0 records out 00:06:11.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242431 s, 43.3 MB/s 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.691 256+0 records in 00:06:11.691 256+0 records out 00:06:11.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257694 s, 40.7 MB/s 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.691 12:50:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@41 -- # break 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.950 12:50:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@41 -- # break 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.209 12:50:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@65 -- # true 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.484 12:50:53 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.484 12:50:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
spdk_kill_instance SIGTERM 00:06:12.755 12:50:53 -- event/event.sh@35 -- # sleep 3 00:06:13.014 [2024-12-13 12:50:53.662483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.014 [2024-12-13 12:50:53.703503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.014 [2024-12-13 12:50:53.703514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.014 [2024-12-13 12:50:53.754471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.014 [2024-12-13 12:50:53.754541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.305 spdk_app_start Round 2 00:06:16.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.305 12:50:56 -- event/event.sh@23 -- # for i in {0..2} 00:06:16.305 12:50:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:16.305 12:50:56 -- event/event.sh@25 -- # waitforlisten 68544 /var/tmp/spdk-nbd.sock 00:06:16.305 12:50:56 -- common/autotest_common.sh@829 -- # '[' -z 68544 ']' 00:06:16.305 12:50:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.305 12:50:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.305 12:50:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.305 12:50:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.305 12:50:56 -- common/autotest_common.sh@10 -- # set +x 00:06:16.305 12:50:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.305 12:50:56 -- common/autotest_common.sh@862 -- # return 0 00:06:16.305 12:50:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.305 Malloc0 00:06:16.305 12:50:57 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.564 Malloc1 00:06:16.564 12:50:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@12 -- # local i 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.564 12:50:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.822 /dev/nbd0 00:06:16.822 12:50:57 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.822 12:50:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.822 12:50:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:16.822 12:50:57 -- common/autotest_common.sh@867 -- # local i 00:06:16.822 12:50:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.822 12:50:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.822 12:50:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:16.822 12:50:57 -- common/autotest_common.sh@871 -- # break 00:06:16.822 12:50:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.822 12:50:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.822 12:50:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.822 1+0 records in 00:06:16.822 1+0 records out 00:06:16.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212527 s, 19.3 MB/s 00:06:16.822 12:50:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.822 12:50:57 -- common/autotest_common.sh@884 -- # size=4096 00:06:16.822 12:50:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.822 12:50:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.822 12:50:57 -- common/autotest_common.sh@887 -- # return 0 00:06:16.822 12:50:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.822 12:50:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.822 12:50:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.081 /dev/nbd1 00:06:17.081 12:50:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.081 12:50:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.081 12:50:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:17.081 12:50:57 -- common/autotest_common.sh@867 -- # local i 00:06:17.081 12:50:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.081 12:50:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.081 12:50:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:17.339 12:50:57 -- common/autotest_common.sh@871 -- # break 00:06:17.339 12:50:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.339 12:50:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.339 12:50:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.339 1+0 records in 00:06:17.339 1+0 records out 00:06:17.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310945 s, 13.2 MB/s 00:06:17.339 12:50:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.339 12:50:57 -- common/autotest_common.sh@884 -- # size=4096 00:06:17.339 12:50:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.339 12:50:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.339 12:50:57 -- common/autotest_common.sh@887 -- # return 0 00:06:17.339 12:50:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.339 12:50:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.339 12:50:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.339 12:50:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.339 
12:50:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.598 { 00:06:17.598 "bdev_name": "Malloc0", 00:06:17.598 "nbd_device": "/dev/nbd0" 00:06:17.598 }, 00:06:17.598 { 00:06:17.598 "bdev_name": "Malloc1", 00:06:17.598 "nbd_device": "/dev/nbd1" 00:06:17.598 } 00:06:17.598 ]' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.598 { 00:06:17.598 "bdev_name": "Malloc0", 00:06:17.598 "nbd_device": "/dev/nbd0" 00:06:17.598 }, 00:06:17.598 { 00:06:17.598 "bdev_name": "Malloc1", 00:06:17.598 "nbd_device": "/dev/nbd1" 00:06:17.598 } 00:06:17.598 ]' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.598 /dev/nbd1' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.598 /dev/nbd1' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.598 256+0 records in 00:06:17.598 256+0 records out 00:06:17.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609554 s, 172 MB/s 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.598 256+0 records in 00:06:17.598 256+0 records out 00:06:17.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238547 s, 44.0 MB/s 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.598 256+0 records in 00:06:17.598 256+0 records out 00:06:17.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285767 s, 36.7 MB/s 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.598 12:50:58 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.598 12:50:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.599 12:50:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@41 -- # break 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.857 12:50:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@41 -- # break 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.115 12:50:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.373 12:50:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.373 12:50:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.373 12:50:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.373 12:50:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.373 12:50:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.373 12:50:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.374 12:50:59 -- bdev/nbd_common.sh@65 -- # true 00:06:18.374 12:50:59 -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.374 12:50:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.374 12:50:59 -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.374 12:50:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.374 12:50:59 -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.374 12:50:59 -- event/event.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.632 12:50:59 -- event/event.sh@35 -- # sleep 3 00:06:18.891 [2024-12-13 12:50:59.552124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.891 [2024-12-13 12:50:59.593406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.891 [2024-12-13 12:50:59.593417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.891 [2024-12-13 12:50:59.648796] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.891 [2024-12-13 12:50:59.648884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.176 12:51:02 -- event/event.sh@38 -- # waitforlisten 68544 /var/tmp/spdk-nbd.sock 00:06:22.176 12:51:02 -- common/autotest_common.sh@829 -- # '[' -z 68544 ']' 00:06:22.176 12:51:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.176 12:51:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.176 12:51:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.176 12:51:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.176 12:51:02 -- common/autotest_common.sh@10 -- # set +x 00:06:22.176 12:51:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.176 12:51:02 -- common/autotest_common.sh@862 -- # return 0 00:06:22.176 12:51:02 -- event/event.sh@39 -- # killprocess 68544 00:06:22.176 12:51:02 -- common/autotest_common.sh@936 -- # '[' -z 68544 ']' 00:06:22.176 12:51:02 -- common/autotest_common.sh@940 -- # kill -0 68544 00:06:22.176 12:51:02 -- common/autotest_common.sh@941 -- # uname 00:06:22.176 12:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.176 12:51:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68544 00:06:22.176 killing process with pid 68544 00:06:22.176 12:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.176 12:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.176 12:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68544' 00:06:22.176 12:51:02 -- common/autotest_common.sh@955 -- # kill 68544 00:06:22.176 12:51:02 -- common/autotest_common.sh@960 -- # wait 68544 00:06:22.176 spdk_app_start is called in Round 0. 00:06:22.176 Shutdown signal received, stop current app iteration 00:06:22.176 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:22.176 spdk_app_start is called in Round 1. 00:06:22.176 Shutdown signal received, stop current app iteration 00:06:22.176 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:22.176 spdk_app_start is called in Round 2. 00:06:22.176 Shutdown signal received, stop current app iteration 00:06:22.176 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:22.176 spdk_app_start is called in Round 3. 
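Note: the write/verify/teardown cycle traced in the round above reduces to the pattern below. This is a minimal sketch assembled from the exact commands in this run (the rpc.py path, the /var/tmp/spdk-nbd.sock socket, the 4096x256 = 1 MiB sizes, and the cmp/grep checks); the temp-file path, variable names and the sleep interval in the poll loop are illustrative stand-ins, not the test's literal helpers.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest                                         # stand-in for the test's nbdrandtest file

    dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # push it through the exported NBD device
        cmp -b -n 1M "$tmp" "$nbd"                               # byte-for-byte read-back check
    done
    rm "$tmp"

    for name in nbd0 nbd1; do
        "$rpc" -s "$sock" nbd_stop_disk "/dev/$name"             # detach the device on the target side
        for i in $(seq 1 20); do                                 # wait until the kernel drops it
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1                                            # interval is an assumption
        done
    done
    "$rpc" -s "$sock" nbd_get_disks                              # prints [] once nothing is exported
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM                 # each app_repeat round then signals the app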
00:06:22.176 Shutdown signal received, stop current app iteration 00:06:22.176 12:51:02 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:22.176 12:51:02 -- event/event.sh@42 -- # return 0 00:06:22.176 00:06:22.176 real 0m18.866s 00:06:22.176 user 0m42.677s 00:06:22.176 sys 0m2.800s 00:06:22.176 12:51:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.176 ************************************ 00:06:22.176 END TEST app_repeat 00:06:22.176 ************************************ 00:06:22.176 12:51:02 -- common/autotest_common.sh@10 -- # set +x 00:06:22.176 12:51:02 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:22.176 12:51:02 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.176 12:51:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.176 12:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.176 12:51:02 -- common/autotest_common.sh@10 -- # set +x 00:06:22.176 ************************************ 00:06:22.176 START TEST cpu_locks 00:06:22.176 ************************************ 00:06:22.176 12:51:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.435 * Looking for test storage... 00:06:22.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.435 12:51:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:22.435 12:51:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:22.435 12:51:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:22.435 12:51:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:22.435 12:51:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:22.435 12:51:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:22.435 12:51:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:22.435 12:51:03 -- scripts/common.sh@335 -- # IFS=.-: 00:06:22.435 12:51:03 -- scripts/common.sh@335 -- # read -ra ver1 00:06:22.435 12:51:03 -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.435 12:51:03 -- scripts/common.sh@336 -- # read -ra ver2 00:06:22.435 12:51:03 -- scripts/common.sh@337 -- # local 'op=<' 00:06:22.435 12:51:03 -- scripts/common.sh@339 -- # ver1_l=2 00:06:22.435 12:51:03 -- scripts/common.sh@340 -- # ver2_l=1 00:06:22.435 12:51:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:22.435 12:51:03 -- scripts/common.sh@343 -- # case "$op" in 00:06:22.435 12:51:03 -- scripts/common.sh@344 -- # : 1 00:06:22.435 12:51:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:22.435 12:51:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.435 12:51:03 -- scripts/common.sh@364 -- # decimal 1 00:06:22.435 12:51:03 -- scripts/common.sh@352 -- # local d=1 00:06:22.435 12:51:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.435 12:51:03 -- scripts/common.sh@354 -- # echo 1 00:06:22.435 12:51:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:22.435 12:51:03 -- scripts/common.sh@365 -- # decimal 2 00:06:22.435 12:51:03 -- scripts/common.sh@352 -- # local d=2 00:06:22.435 12:51:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.435 12:51:03 -- scripts/common.sh@354 -- # echo 2 00:06:22.435 12:51:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:22.435 12:51:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:22.435 12:51:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:22.435 12:51:03 -- scripts/common.sh@367 -- # return 0 00:06:22.435 12:51:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.435 12:51:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.435 --rc genhtml_branch_coverage=1 00:06:22.435 --rc genhtml_function_coverage=1 00:06:22.435 --rc genhtml_legend=1 00:06:22.435 --rc geninfo_all_blocks=1 00:06:22.435 --rc geninfo_unexecuted_blocks=1 00:06:22.435 00:06:22.435 ' 00:06:22.435 12:51:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.435 --rc genhtml_branch_coverage=1 00:06:22.435 --rc genhtml_function_coverage=1 00:06:22.435 --rc genhtml_legend=1 00:06:22.435 --rc geninfo_all_blocks=1 00:06:22.435 --rc geninfo_unexecuted_blocks=1 00:06:22.435 00:06:22.435 ' 00:06:22.435 12:51:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.435 --rc genhtml_branch_coverage=1 00:06:22.435 --rc genhtml_function_coverage=1 00:06:22.435 --rc genhtml_legend=1 00:06:22.435 --rc geninfo_all_blocks=1 00:06:22.435 --rc geninfo_unexecuted_blocks=1 00:06:22.435 00:06:22.435 ' 00:06:22.435 12:51:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.435 --rc genhtml_branch_coverage=1 00:06:22.435 --rc genhtml_function_coverage=1 00:06:22.435 --rc genhtml_legend=1 00:06:22.435 --rc geninfo_all_blocks=1 00:06:22.435 --rc geninfo_unexecuted_blocks=1 00:06:22.435 00:06:22.435 ' 00:06:22.435 12:51:03 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:22.435 12:51:03 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:22.435 12:51:03 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:22.435 12:51:03 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:22.435 12:51:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.435 12:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.435 12:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:22.435 ************************************ 00:06:22.435 START TEST default_locks 00:06:22.435 ************************************ 00:06:22.435 12:51:03 -- common/autotest_common.sh@1114 -- # default_locks 00:06:22.435 12:51:03 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69175 00:06:22.435 12:51:03 -- event/cpu_locks.sh@47 -- # waitforlisten 69175 00:06:22.435 12:51:03 -- common/autotest_common.sh@829 -- # '[' -z 69175 ']' 00:06:22.435 12:51:03 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.435 12:51:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.435 12:51:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.435 12:51:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.435 12:51:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.435 12:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:22.435 [2024-12-13 12:51:03.173785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:22.435 [2024-12-13 12:51:03.173899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69175 ] 00:06:22.694 [2024-12-13 12:51:03.309529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.694 [2024-12-13 12:51:03.366486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.694 [2024-12-13 12:51:03.366640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.629 12:51:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.629 12:51:04 -- common/autotest_common.sh@862 -- # return 0 00:06:23.629 12:51:04 -- event/cpu_locks.sh@49 -- # locks_exist 69175 00:06:23.629 12:51:04 -- event/cpu_locks.sh@22 -- # lslocks -p 69175 00:06:23.629 12:51:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.888 12:51:04 -- event/cpu_locks.sh@50 -- # killprocess 69175 00:06:23.888 12:51:04 -- common/autotest_common.sh@936 -- # '[' -z 69175 ']' 00:06:23.888 12:51:04 -- common/autotest_common.sh@940 -- # kill -0 69175 00:06:23.888 12:51:04 -- common/autotest_common.sh@941 -- # uname 00:06:23.888 12:51:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.888 12:51:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69175 00:06:23.888 12:51:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.888 12:51:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.888 killing process with pid 69175 00:06:23.888 12:51:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69175' 00:06:23.888 12:51:04 -- common/autotest_common.sh@955 -- # kill 69175 00:06:23.888 12:51:04 -- common/autotest_common.sh@960 -- # wait 69175 00:06:24.147 12:51:04 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69175 00:06:24.147 12:51:04 -- common/autotest_common.sh@650 -- # local es=0 00:06:24.147 12:51:04 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69175 00:06:24.147 12:51:04 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.147 12:51:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.147 12:51:04 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.147 12:51:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.147 12:51:04 -- common/autotest_common.sh@653 -- # waitforlisten 69175 00:06:24.147 12:51:04 -- common/autotest_common.sh@829 -- # '[' -z 69175 ']' 00:06:24.147 12:51:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.147 12:51:04 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.147 12:51:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.147 12:51:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.147 12:51:04 -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 ERROR: process (pid: 69175) is no longer running 00:06:24.147 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69175) - No such process 00:06:24.147 12:51:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.147 12:51:04 -- common/autotest_common.sh@862 -- # return 1 00:06:24.147 12:51:04 -- common/autotest_common.sh@653 -- # es=1 00:06:24.147 12:51:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.147 12:51:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.147 12:51:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.147 12:51:04 -- event/cpu_locks.sh@54 -- # no_locks 00:06:24.147 12:51:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.147 12:51:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.147 12:51:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.147 00:06:24.147 real 0m1.691s 00:06:24.147 user 0m1.797s 00:06:24.147 sys 0m0.504s 00:06:24.147 12:51:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.147 ************************************ 00:06:24.147 12:51:04 -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 END TEST default_locks 00:06:24.147 ************************************ 00:06:24.147 12:51:04 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:24.147 12:51:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.147 12:51:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.147 12:51:04 -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 ************************************ 00:06:24.147 START TEST default_locks_via_rpc 00:06:24.147 ************************************ 00:06:24.147 12:51:04 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:24.147 12:51:04 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69238 00:06:24.147 12:51:04 -- event/cpu_locks.sh@63 -- # waitforlisten 69238 00:06:24.147 12:51:04 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.147 12:51:04 -- common/autotest_common.sh@829 -- # '[' -z 69238 ']' 00:06:24.147 12:51:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.147 12:51:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.147 12:51:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.147 12:51:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.147 12:51:04 -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 [2024-12-13 12:51:04.901519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
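Note: the default_locks steps just traced rely on spdk_tgt holding a lock file per claimed core under /var/tmp (spdk_cpu_lock_000 and so on, as listed later in the overlapped test). The check-then-kill sequence can be approximated as below; the spdk_tgt path and the lslocks/ps invocations are taken from the trace, while the backgrounding, variable names and the skipped waitforlisten step are illustrative.

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &
    pid=$!
    # the real test blocks here with waitforlisten until /var/tmp/spdk.sock answers

    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

    ps --no-headers -o comm= "$pid"                # reported as reactor_0 in this run
    kill "$pid" && wait "$pid"                     # graceful shutdown, as killprocess() does
    kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"   # a later waitforlisten on it fails, as traced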
00:06:24.147 [2024-12-13 12:51:04.901622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69238 ] 00:06:24.411 [2024-12-13 12:51:05.028529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.411 [2024-12-13 12:51:05.082962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.411 [2024-12-13 12:51:05.083147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.355 12:51:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.355 12:51:05 -- common/autotest_common.sh@862 -- # return 0 00:06:25.355 12:51:05 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.355 12:51:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.355 12:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:25.355 12:51:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.355 12:51:05 -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.355 12:51:05 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.355 12:51:05 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.355 12:51:05 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.355 12:51:05 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.355 12:51:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.355 12:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:25.355 12:51:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.355 12:51:05 -- event/cpu_locks.sh@71 -- # locks_exist 69238 00:06:25.355 12:51:05 -- event/cpu_locks.sh@22 -- # lslocks -p 69238 00:06:25.355 12:51:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.614 12:51:06 -- event/cpu_locks.sh@73 -- # killprocess 69238 00:06:25.614 12:51:06 -- common/autotest_common.sh@936 -- # '[' -z 69238 ']' 00:06:25.614 12:51:06 -- common/autotest_common.sh@940 -- # kill -0 69238 00:06:25.614 12:51:06 -- common/autotest_common.sh@941 -- # uname 00:06:25.614 12:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.614 12:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69238 00:06:25.614 12:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.614 killing process with pid 69238 00:06:25.614 12:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.614 12:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69238' 00:06:25.614 12:51:06 -- common/autotest_common.sh@955 -- # kill 69238 00:06:25.614 12:51:06 -- common/autotest_common.sh@960 -- # wait 69238 00:06:26.181 00:06:26.181 real 0m1.856s 00:06:26.181 user 0m2.023s 00:06:26.181 sys 0m0.544s 00:06:26.181 12:51:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.181 12:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:26.181 ************************************ 00:06:26.182 END TEST default_locks_via_rpc 00:06:26.182 ************************************ 00:06:26.182 12:51:06 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.182 12:51:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.182 12:51:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.182 12:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:26.182 
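Note: default_locks_via_rpc drives the same per-core lock over RPC instead of process startup flags. Both RPC names appear verbatim in the trace; rpc.py talks to the primary's default socket (/var/tmp/spdk.sock in this run), and $pid below is an illustrative stand-in for the target started above.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" framework_disable_cpumask_locks                      # release the core locks at runtime
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core locks held"

    "$rpc" framework_enable_cpumask_locks                       # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks held again"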
************************************ 00:06:26.182 START TEST non_locking_app_on_locked_coremask 00:06:26.182 ************************************ 00:06:26.182 12:51:06 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:26.182 12:51:06 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69303 00:06:26.182 12:51:06 -- event/cpu_locks.sh@81 -- # waitforlisten 69303 /var/tmp/spdk.sock 00:06:26.182 12:51:06 -- common/autotest_common.sh@829 -- # '[' -z 69303 ']' 00:06:26.182 12:51:06 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.182 12:51:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.182 12:51:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.182 12:51:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.182 12:51:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.182 12:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:26.182 [2024-12-13 12:51:06.823390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:26.182 [2024-12-13 12:51:06.823488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69303 ] 00:06:26.182 [2024-12-13 12:51:06.957499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.440 [2024-12-13 12:51:07.012284] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.440 [2024-12-13 12:51:07.012460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.077 12:51:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.077 12:51:07 -- common/autotest_common.sh@862 -- # return 0 00:06:27.077 12:51:07 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.077 12:51:07 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69331 00:06:27.077 12:51:07 -- event/cpu_locks.sh@85 -- # waitforlisten 69331 /var/tmp/spdk2.sock 00:06:27.077 12:51:07 -- common/autotest_common.sh@829 -- # '[' -z 69331 ']' 00:06:27.077 12:51:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.077 12:51:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.077 12:51:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.077 12:51:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.077 12:51:07 -- common/autotest_common.sh@10 -- # set +x 00:06:27.077 [2024-12-13 12:51:07.851848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.077 [2024-12-13 12:51:07.851966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69331 ] 00:06:27.335 [2024-12-13 12:51:07.990395] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
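Note: the non_locking_app_on_locked_coremask launch pattern above shows that a second target can share core 0 only by opting out of the core lock and using its own RPC socket. Both command lines are lifted from the trace; the backgrounding and pid variables are illustrative.

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                                                   # primary: claims core 0
    primary=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # secondary: same core, lock skipped
    secondary=$!
    # the secondary logs "CPU core locks deactivated." and starts normally; without
    # --disable-cpumask-locks it would abort, as the locking_app_on_locked_coremask test below shows.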
00:06:27.335 [2024-12-13 12:51:07.990424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.594 [2024-12-13 12:51:08.113535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.594 [2024-12-13 12:51:08.113709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.161 12:51:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.162 12:51:08 -- common/autotest_common.sh@862 -- # return 0 00:06:28.162 12:51:08 -- event/cpu_locks.sh@87 -- # locks_exist 69303 00:06:28.162 12:51:08 -- event/cpu_locks.sh@22 -- # lslocks -p 69303 00:06:28.162 12:51:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.097 12:51:09 -- event/cpu_locks.sh@89 -- # killprocess 69303 00:06:29.097 12:51:09 -- common/autotest_common.sh@936 -- # '[' -z 69303 ']' 00:06:29.097 12:51:09 -- common/autotest_common.sh@940 -- # kill -0 69303 00:06:29.097 12:51:09 -- common/autotest_common.sh@941 -- # uname 00:06:29.097 12:51:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.097 12:51:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69303 00:06:29.097 12:51:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.097 12:51:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.097 killing process with pid 69303 00:06:29.097 12:51:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69303' 00:06:29.097 12:51:09 -- common/autotest_common.sh@955 -- # kill 69303 00:06:29.097 12:51:09 -- common/autotest_common.sh@960 -- # wait 69303 00:06:29.665 12:51:10 -- event/cpu_locks.sh@90 -- # killprocess 69331 00:06:29.665 12:51:10 -- common/autotest_common.sh@936 -- # '[' -z 69331 ']' 00:06:29.665 12:51:10 -- common/autotest_common.sh@940 -- # kill -0 69331 00:06:29.665 12:51:10 -- common/autotest_common.sh@941 -- # uname 00:06:29.665 12:51:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.665 12:51:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69331 00:06:29.665 12:51:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.665 12:51:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.665 killing process with pid 69331 00:06:29.665 12:51:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69331' 00:06:29.666 12:51:10 -- common/autotest_common.sh@955 -- # kill 69331 00:06:29.666 12:51:10 -- common/autotest_common.sh@960 -- # wait 69331 00:06:30.233 00:06:30.233 real 0m3.970s 00:06:30.233 user 0m4.469s 00:06:30.233 sys 0m1.071s 00:06:30.233 12:51:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.233 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:30.233 ************************************ 00:06:30.233 END TEST non_locking_app_on_locked_coremask 00:06:30.233 ************************************ 00:06:30.233 12:51:10 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:30.233 12:51:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.233 12:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.233 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:30.234 ************************************ 00:06:30.234 START TEST locking_app_on_unlocked_coremask 00:06:30.234 ************************************ 00:06:30.234 12:51:10 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:30.234 12:51:10 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69410 00:06:30.234 12:51:10 -- event/cpu_locks.sh@99 -- # waitforlisten 69410 /var/tmp/spdk.sock 00:06:30.234 12:51:10 -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:06:30.234 12:51:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.234 12:51:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.234 12:51:10 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.234 12:51:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.234 12:51:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.234 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:30.234 [2024-12-13 12:51:10.844980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:30.234 [2024-12-13 12:51:10.845074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69410 ] 00:06:30.234 [2024-12-13 12:51:10.979129] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.234 [2024-12-13 12:51:10.979174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.493 [2024-12-13 12:51:11.034128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.493 [2024-12-13 12:51:11.034295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.061 12:51:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.061 12:51:11 -- common/autotest_common.sh@862 -- # return 0 00:06:31.320 12:51:11 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69438 00:06:31.320 12:51:11 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.320 12:51:11 -- event/cpu_locks.sh@103 -- # waitforlisten 69438 /var/tmp/spdk2.sock 00:06:31.320 12:51:11 -- common/autotest_common.sh@829 -- # '[' -z 69438 ']' 00:06:31.320 12:51:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.320 12:51:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.320 12:51:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.320 12:51:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.320 12:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:31.320 [2024-12-13 12:51:11.888942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:31.320 [2024-12-13 12:51:11.889042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69438 ] 00:06:31.320 [2024-12-13 12:51:12.020247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.579 [2024-12-13 12:51:12.161882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.579 [2024-12-13 12:51:12.161999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.146 12:51:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.146 12:51:12 -- common/autotest_common.sh@862 -- # return 0 00:06:32.146 12:51:12 -- event/cpu_locks.sh@105 -- # locks_exist 69438 00:06:32.146 12:51:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.146 12:51:12 -- event/cpu_locks.sh@22 -- # lslocks -p 69438 00:06:33.081 12:51:13 -- event/cpu_locks.sh@107 -- # killprocess 69410 00:06:33.081 12:51:13 -- common/autotest_common.sh@936 -- # '[' -z 69410 ']' 00:06:33.081 12:51:13 -- common/autotest_common.sh@940 -- # kill -0 69410 00:06:33.081 12:51:13 -- common/autotest_common.sh@941 -- # uname 00:06:33.081 12:51:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.081 12:51:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69410 00:06:33.081 12:51:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.081 12:51:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.081 killing process with pid 69410 00:06:33.081 12:51:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69410' 00:06:33.081 12:51:13 -- common/autotest_common.sh@955 -- # kill 69410 00:06:33.081 12:51:13 -- common/autotest_common.sh@960 -- # wait 69410 00:06:33.647 12:51:14 -- event/cpu_locks.sh@108 -- # killprocess 69438 00:06:33.647 12:51:14 -- common/autotest_common.sh@936 -- # '[' -z 69438 ']' 00:06:33.647 12:51:14 -- common/autotest_common.sh@940 -- # kill -0 69438 00:06:33.647 12:51:14 -- common/autotest_common.sh@941 -- # uname 00:06:33.647 12:51:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.647 12:51:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69438 00:06:33.906 12:51:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.906 12:51:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.906 killing process with pid 69438 00:06:33.906 12:51:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69438' 00:06:33.906 12:51:14 -- common/autotest_common.sh@955 -- # kill 69438 00:06:33.906 12:51:14 -- common/autotest_common.sh@960 -- # wait 69438 00:06:34.165 00:06:34.165 real 0m3.979s 00:06:34.165 user 0m4.514s 00:06:34.165 sys 0m1.081s 00:06:34.165 12:51:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.165 12:51:14 -- common/autotest_common.sh@10 -- # set +x 00:06:34.165 ************************************ 00:06:34.165 END TEST locking_app_on_unlocked_coremask 00:06:34.165 ************************************ 00:06:34.165 12:51:14 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.165 12:51:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.165 12:51:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.165 12:51:14 -- common/autotest_common.sh@10 -- # set +x 
00:06:34.165 ************************************ 00:06:34.165 START TEST locking_app_on_locked_coremask 00:06:34.165 ************************************ 00:06:34.165 12:51:14 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:34.165 12:51:14 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69517 00:06:34.165 12:51:14 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.165 12:51:14 -- event/cpu_locks.sh@116 -- # waitforlisten 69517 /var/tmp/spdk.sock 00:06:34.165 12:51:14 -- common/autotest_common.sh@829 -- # '[' -z 69517 ']' 00:06:34.165 12:51:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.165 12:51:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.165 12:51:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.165 12:51:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.165 12:51:14 -- common/autotest_common.sh@10 -- # set +x 00:06:34.165 [2024-12-13 12:51:14.863230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.165 [2024-12-13 12:51:14.863308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69517 ] 00:06:34.424 [2024-12-13 12:51:14.989769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.424 [2024-12-13 12:51:15.043428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.424 [2024-12-13 12:51:15.043621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.360 12:51:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.360 12:51:15 -- common/autotest_common.sh@862 -- # return 0 00:06:35.360 12:51:15 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69545 00:06:35.360 12:51:15 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69545 /var/tmp/spdk2.sock 00:06:35.360 12:51:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:35.360 12:51:15 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.360 12:51:15 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69545 /var/tmp/spdk2.sock 00:06:35.360 12:51:15 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:35.360 12:51:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.360 12:51:15 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:35.360 12:51:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.360 12:51:15 -- common/autotest_common.sh@653 -- # waitforlisten 69545 /var/tmp/spdk2.sock 00:06:35.360 12:51:15 -- common/autotest_common.sh@829 -- # '[' -z 69545 ']' 00:06:35.360 12:51:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.360 12:51:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.360 12:51:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:35.360 12:51:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.360 12:51:15 -- common/autotest_common.sh@10 -- # set +x 00:06:35.360 [2024-12-13 12:51:15.928484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:35.360 [2024-12-13 12:51:15.928582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69545 ] 00:06:35.360 [2024-12-13 12:51:16.065133] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69517 has claimed it. 00:06:35.360 [2024-12-13 12:51:16.065198] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.927 ERROR: process (pid: 69545) is no longer running 00:06:35.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69545) - No such process 00:06:35.927 12:51:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.927 12:51:16 -- common/autotest_common.sh@862 -- # return 1 00:06:35.927 12:51:16 -- common/autotest_common.sh@653 -- # es=1 00:06:35.927 12:51:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.927 12:51:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.927 12:51:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.927 12:51:16 -- event/cpu_locks.sh@122 -- # locks_exist 69517 00:06:35.927 12:51:16 -- event/cpu_locks.sh@22 -- # lslocks -p 69517 00:06:35.927 12:51:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.521 12:51:17 -- event/cpu_locks.sh@124 -- # killprocess 69517 00:06:36.521 12:51:17 -- common/autotest_common.sh@936 -- # '[' -z 69517 ']' 00:06:36.521 12:51:17 -- common/autotest_common.sh@940 -- # kill -0 69517 00:06:36.521 12:51:17 -- common/autotest_common.sh@941 -- # uname 00:06:36.521 12:51:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.521 12:51:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69517 00:06:36.521 12:51:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.521 12:51:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.521 killing process with pid 69517 00:06:36.521 12:51:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69517' 00:06:36.521 12:51:17 -- common/autotest_common.sh@955 -- # kill 69517 00:06:36.521 12:51:17 -- common/autotest_common.sh@960 -- # wait 69517 00:06:36.796 00:06:36.796 real 0m2.630s 00:06:36.796 user 0m3.079s 00:06:36.796 sys 0m0.634s 00:06:36.796 12:51:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.796 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:06:36.796 ************************************ 00:06:36.796 END TEST locking_app_on_locked_coremask 00:06:36.796 ************************************ 00:06:36.796 12:51:17 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.796 12:51:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.796 12:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.796 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:06:36.796 ************************************ 00:06:36.796 START TEST locking_overlapped_coremask 00:06:36.796 ************************************ 00:06:36.796 12:51:17 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:36.796 12:51:17 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69596 00:06:36.796 12:51:17 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.796 12:51:17 -- event/cpu_locks.sh@133 -- # waitforlisten 69596 /var/tmp/spdk.sock 00:06:36.796 12:51:17 -- common/autotest_common.sh@829 -- # '[' -z 69596 ']' 00:06:36.796 12:51:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.796 12:51:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.796 12:51:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.796 12:51:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.796 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:06:36.796 [2024-12-13 12:51:17.557309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:36.796 [2024-12-13 12:51:17.557405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69596 ] 00:06:37.055 [2024-12-13 12:51:17.694677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.055 [2024-12-13 12:51:17.757551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.055 [2024-12-13 12:51:17.757859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.055 [2024-12-13 12:51:17.758169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.055 [2024-12-13 12:51:17.758179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.989 12:51:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.989 12:51:18 -- common/autotest_common.sh@862 -- # return 0 00:06:37.989 12:51:18 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69626 00:06:37.989 12:51:18 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.989 12:51:18 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69626 /var/tmp/spdk2.sock 00:06:37.989 12:51:18 -- common/autotest_common.sh@650 -- # local es=0 00:06:37.989 12:51:18 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69626 /var/tmp/spdk2.sock 00:06:37.989 12:51:18 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:37.989 12:51:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.989 12:51:18 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:37.989 12:51:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.989 12:51:18 -- common/autotest_common.sh@653 -- # waitforlisten 69626 /var/tmp/spdk2.sock 00:06:37.989 12:51:18 -- common/autotest_common.sh@829 -- # '[' -z 69626 ']' 00:06:37.989 12:51:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.989 12:51:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.989 12:51:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
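Note: the two core masks in this test overlap by construction, which is the whole point of locking_overlapped_coremask; the contested core named in the failure that follows can be read straight off the masks:

    0x07 = 0b00111 -> cores 0, 1, 2   (primary, pid 69596)
    0x1c = 0b11100 -> cores 2, 3, 4   (secondary, pid 69626)
    0x07 & 0x1c = 0x04 -> core 2      (the lock both sides want)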
00:06:37.989 12:51:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.989 12:51:18 -- common/autotest_common.sh@10 -- # set +x 00:06:37.989 [2024-12-13 12:51:18.595143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:37.989 [2024-12-13 12:51:18.595227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69626 ] 00:06:37.989 [2024-12-13 12:51:18.746869] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69596 has claimed it. 00:06:37.989 [2024-12-13 12:51:18.746966] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.924 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69626) - No such process 00:06:38.924 ERROR: process (pid: 69626) is no longer running 00:06:38.924 12:51:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.924 12:51:19 -- common/autotest_common.sh@862 -- # return 1 00:06:38.924 12:51:19 -- common/autotest_common.sh@653 -- # es=1 00:06:38.924 12:51:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.924 12:51:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:38.924 12:51:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.924 12:51:19 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.924 12:51:19 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.924 12:51:19 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.924 12:51:19 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.924 12:51:19 -- event/cpu_locks.sh@141 -- # killprocess 69596 00:06:38.924 12:51:19 -- common/autotest_common.sh@936 -- # '[' -z 69596 ']' 00:06:38.924 12:51:19 -- common/autotest_common.sh@940 -- # kill -0 69596 00:06:38.924 12:51:19 -- common/autotest_common.sh@941 -- # uname 00:06:38.924 12:51:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:38.924 12:51:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69596 00:06:38.924 12:51:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:38.924 12:51:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:38.924 killing process with pid 69596 00:06:38.924 12:51:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69596' 00:06:38.924 12:51:19 -- common/autotest_common.sh@955 -- # kill 69596 00:06:38.924 12:51:19 -- common/autotest_common.sh@960 -- # wait 69596 00:06:39.182 00:06:39.182 real 0m2.248s 00:06:39.182 user 0m6.422s 00:06:39.182 sys 0m0.453s 00:06:39.182 12:51:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.182 12:51:19 -- common/autotest_common.sh@10 -- # set +x 00:06:39.182 ************************************ 00:06:39.182 END TEST locking_overlapped_coremask 00:06:39.182 ************************************ 00:06:39.182 12:51:19 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.182 12:51:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.183 12:51:19 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.183 12:51:19 -- common/autotest_common.sh@10 -- # set +x 00:06:39.183 ************************************ 00:06:39.183 START TEST locking_overlapped_coremask_via_rpc 00:06:39.183 ************************************ 00:06:39.183 12:51:19 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:39.183 12:51:19 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69678 00:06:39.183 12:51:19 -- event/cpu_locks.sh@149 -- # waitforlisten 69678 /var/tmp/spdk.sock 00:06:39.183 12:51:19 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.183 12:51:19 -- common/autotest_common.sh@829 -- # '[' -z 69678 ']' 00:06:39.183 12:51:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.183 12:51:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.183 12:51:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.183 12:51:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.183 12:51:19 -- common/autotest_common.sh@10 -- # set +x 00:06:39.183 [2024-12-13 12:51:19.845494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:39.183 [2024-12-13 12:51:19.845581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69678 ] 00:06:39.440 [2024-12-13 12:51:19.973111] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:39.440 [2024-12-13 12:51:19.973160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.440 [2024-12-13 12:51:20.030233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.440 [2024-12-13 12:51:20.030586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.440 [2024-12-13 12:51:20.030852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.440 [2024-12-13 12:51:20.030856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.374 12:51:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.374 12:51:20 -- common/autotest_common.sh@862 -- # return 0 00:06:40.374 12:51:20 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69708 00:06:40.374 12:51:20 -- event/cpu_locks.sh@153 -- # waitforlisten 69708 /var/tmp/spdk2.sock 00:06:40.374 12:51:20 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.374 12:51:20 -- common/autotest_common.sh@829 -- # '[' -z 69708 ']' 00:06:40.375 12:51:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.375 12:51:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.375 12:51:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:40.375 12:51:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.375 12:51:20 -- common/autotest_common.sh@10 -- # set +x 00:06:40.375 [2024-12-13 12:51:20.939401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.375 [2024-12-13 12:51:20.939536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69708 ] 00:06:40.375 [2024-12-13 12:51:21.087946] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:40.375 [2024-12-13 12:51:21.088015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.633 [2024-12-13 12:51:21.286220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.633 [2024-12-13 12:51:21.286923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.633 [2024-12-13 12:51:21.287095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.633 [2024-12-13 12:51:21.287100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.199 12:51:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.199 12:51:21 -- common/autotest_common.sh@862 -- # return 0 00:06:41.199 12:51:21 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.199 12:51:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.199 12:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:41.199 12:51:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.199 12:51:21 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.199 12:51:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.199 12:51:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.199 12:51:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:41.199 12:51:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.199 12:51:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:41.199 12:51:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.199 12:51:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.199 12:51:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.199 12:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:41.199 [2024-12-13 12:51:21.935916] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69678 has claimed it. 
00:06:41.199 2024/12/13 12:51:21 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:41.199 request: 00:06:41.199 { 00:06:41.199 "method": "framework_enable_cpumask_locks", 00:06:41.199 "params": {} 00:06:41.199 } 00:06:41.199 Got JSON-RPC error response 00:06:41.199 GoRPCClient: error on JSON-RPC call 00:06:41.199 12:51:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:41.199 12:51:21 -- common/autotest_common.sh@653 -- # es=1 00:06:41.199 12:51:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.199 12:51:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.199 12:51:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.199 12:51:21 -- event/cpu_locks.sh@158 -- # waitforlisten 69678 /var/tmp/spdk.sock 00:06:41.199 12:51:21 -- common/autotest_common.sh@829 -- # '[' -z 69678 ']' 00:06:41.199 12:51:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.199 12:51:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.199 12:51:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.199 12:51:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.199 12:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:41.457 12:51:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.457 12:51:22 -- common/autotest_common.sh@862 -- # return 0 00:06:41.457 12:51:22 -- event/cpu_locks.sh@159 -- # waitforlisten 69708 /var/tmp/spdk2.sock 00:06:41.457 12:51:22 -- common/autotest_common.sh@829 -- # '[' -z 69708 ']' 00:06:41.457 12:51:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.457 12:51:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.457 12:51:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:41.457 12:51:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.457 12:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:41.716 12:51:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.716 12:51:22 -- common/autotest_common.sh@862 -- # return 0 00:06:41.716 12:51:22 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.716 12:51:22 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.716 12:51:22 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.716 12:51:22 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.716 00:06:41.716 real 0m2.602s 00:06:41.716 user 0m1.325s 00:06:41.716 sys 0m0.225s 00:06:41.716 12:51:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.716 12:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:41.716 ************************************ 00:06:41.716 END TEST locking_overlapped_coremask_via_rpc 00:06:41.716 ************************************ 00:06:41.716 12:51:22 -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.716 12:51:22 -- event/cpu_locks.sh@15 -- # [[ -z 69678 ]] 00:06:41.716 12:51:22 -- event/cpu_locks.sh@15 -- # killprocess 69678 00:06:41.716 12:51:22 -- common/autotest_common.sh@936 -- # '[' -z 69678 ']' 00:06:41.716 12:51:22 -- common/autotest_common.sh@940 -- # kill -0 69678 00:06:41.716 12:51:22 -- common/autotest_common.sh@941 -- # uname 00:06:41.716 12:51:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.716 12:51:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69678 00:06:41.716 12:51:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.716 12:51:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.716 killing process with pid 69678 00:06:41.716 12:51:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69678' 00:06:41.716 12:51:22 -- common/autotest_common.sh@955 -- # kill 69678 00:06:41.716 12:51:22 -- common/autotest_common.sh@960 -- # wait 69678 00:06:42.282 12:51:22 -- event/cpu_locks.sh@16 -- # [[ -z 69708 ]] 00:06:42.282 12:51:22 -- event/cpu_locks.sh@16 -- # killprocess 69708 00:06:42.282 12:51:22 -- common/autotest_common.sh@936 -- # '[' -z 69708 ']' 00:06:42.282 12:51:22 -- common/autotest_common.sh@940 -- # kill -0 69708 00:06:42.282 12:51:22 -- common/autotest_common.sh@941 -- # uname 00:06:42.282 12:51:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.282 12:51:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69708 00:06:42.282 killing process with pid 69708 00:06:42.282 12:51:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:42.282 12:51:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:42.282 12:51:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69708' 00:06:42.282 12:51:22 -- common/autotest_common.sh@955 -- # kill 69708 00:06:42.282 12:51:22 -- common/autotest_common.sh@960 -- # wait 69708 00:06:42.848 12:51:23 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.848 12:51:23 -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.848 12:51:23 -- event/cpu_locks.sh@15 -- # [[ -z 69678 ]] 00:06:42.848 12:51:23 -- event/cpu_locks.sh@15 -- # killprocess 69678 00:06:42.848 12:51:23 -- 
common/autotest_common.sh@936 -- # '[' -z 69678 ']' 00:06:42.848 12:51:23 -- common/autotest_common.sh@940 -- # kill -0 69678 00:06:42.848 Process with pid 69678 is not found 00:06:42.848 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69678) - No such process 00:06:42.848 12:51:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69678 is not found' 00:06:42.848 Process with pid 69708 is not found 00:06:42.848 12:51:23 -- event/cpu_locks.sh@16 -- # [[ -z 69708 ]] 00:06:42.848 12:51:23 -- event/cpu_locks.sh@16 -- # killprocess 69708 00:06:42.848 12:51:23 -- common/autotest_common.sh@936 -- # '[' -z 69708 ']' 00:06:42.848 12:51:23 -- common/autotest_common.sh@940 -- # kill -0 69708 00:06:42.848 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69708) - No such process 00:06:42.848 12:51:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69708 is not found' 00:06:42.848 12:51:23 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.848 00:06:42.848 real 0m20.457s 00:06:42.848 user 0m36.226s 00:06:42.848 sys 0m5.473s 00:06:42.848 12:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.848 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:06:42.848 ************************************ 00:06:42.848 END TEST cpu_locks 00:06:42.848 ************************************ 00:06:42.848 00:06:42.848 real 0m47.477s 00:06:42.848 user 1m31.087s 00:06:42.848 sys 0m9.039s 00:06:42.848 ************************************ 00:06:42.848 END TEST event 00:06:42.848 ************************************ 00:06:42.848 12:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.848 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:06:42.848 12:51:23 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:42.848 12:51:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.848 12:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.848 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:06:42.848 ************************************ 00:06:42.848 START TEST thread 00:06:42.848 ************************************ 00:06:42.848 12:51:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:42.848 * Looking for test storage... 
00:06:42.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:42.848 12:51:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:42.848 12:51:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:42.848 12:51:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:42.848 12:51:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:42.848 12:51:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:42.848 12:51:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:42.848 12:51:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:42.848 12:51:23 -- scripts/common.sh@335 -- # IFS=.-: 00:06:42.848 12:51:23 -- scripts/common.sh@335 -- # read -ra ver1 00:06:42.848 12:51:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.848 12:51:23 -- scripts/common.sh@336 -- # read -ra ver2 00:06:42.848 12:51:23 -- scripts/common.sh@337 -- # local 'op=<' 00:06:42.848 12:51:23 -- scripts/common.sh@339 -- # ver1_l=2 00:06:42.848 12:51:23 -- scripts/common.sh@340 -- # ver2_l=1 00:06:42.848 12:51:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:42.848 12:51:23 -- scripts/common.sh@343 -- # case "$op" in 00:06:42.848 12:51:23 -- scripts/common.sh@344 -- # : 1 00:06:42.848 12:51:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:42.848 12:51:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.848 12:51:23 -- scripts/common.sh@364 -- # decimal 1 00:06:42.848 12:51:23 -- scripts/common.sh@352 -- # local d=1 00:06:42.848 12:51:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.848 12:51:23 -- scripts/common.sh@354 -- # echo 1 00:06:43.107 12:51:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:43.107 12:51:23 -- scripts/common.sh@365 -- # decimal 2 00:06:43.107 12:51:23 -- scripts/common.sh@352 -- # local d=2 00:06:43.107 12:51:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.107 12:51:23 -- scripts/common.sh@354 -- # echo 2 00:06:43.107 12:51:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:43.107 12:51:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:43.107 12:51:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:43.107 12:51:23 -- scripts/common.sh@367 -- # return 0 00:06:43.107 12:51:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.107 12:51:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:43.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.107 --rc genhtml_branch_coverage=1 00:06:43.107 --rc genhtml_function_coverage=1 00:06:43.107 --rc genhtml_legend=1 00:06:43.107 --rc geninfo_all_blocks=1 00:06:43.107 --rc geninfo_unexecuted_blocks=1 00:06:43.107 00:06:43.107 ' 00:06:43.107 12:51:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:43.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.107 --rc genhtml_branch_coverage=1 00:06:43.107 --rc genhtml_function_coverage=1 00:06:43.107 --rc genhtml_legend=1 00:06:43.107 --rc geninfo_all_blocks=1 00:06:43.107 --rc geninfo_unexecuted_blocks=1 00:06:43.107 00:06:43.107 ' 00:06:43.107 12:51:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:43.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.107 --rc genhtml_branch_coverage=1 00:06:43.107 --rc genhtml_function_coverage=1 00:06:43.107 --rc genhtml_legend=1 00:06:43.107 --rc geninfo_all_blocks=1 00:06:43.107 --rc geninfo_unexecuted_blocks=1 00:06:43.107 00:06:43.107 ' 00:06:43.107 12:51:23 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:43.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.107 --rc genhtml_branch_coverage=1 00:06:43.107 --rc genhtml_function_coverage=1 00:06:43.107 --rc genhtml_legend=1 00:06:43.107 --rc geninfo_all_blocks=1 00:06:43.107 --rc geninfo_unexecuted_blocks=1 00:06:43.107 00:06:43.107 ' 00:06:43.107 12:51:23 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.107 12:51:23 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:43.107 12:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.107 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:06:43.107 ************************************ 00:06:43.107 START TEST thread_poller_perf 00:06:43.107 ************************************ 00:06:43.107 12:51:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.107 [2024-12-13 12:51:23.661618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:43.107 [2024-12-13 12:51:23.661706] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69861 ] 00:06:43.107 [2024-12-13 12:51:23.790852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.107 [2024-12-13 12:51:23.842457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.107 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:44.484 [2024-12-13T12:51:25.260Z] ====================================== 00:06:44.484 [2024-12-13T12:51:25.260Z] busy:2208441998 (cyc) 00:06:44.484 [2024-12-13T12:51:25.260Z] total_run_count: 382000 00:06:44.484 [2024-12-13T12:51:25.260Z] tsc_hz: 2200000000 (cyc) 00:06:44.484 [2024-12-13T12:51:25.260Z] ====================================== 00:06:44.484 [2024-12-13T12:51:25.260Z] poller_cost: 5781 (cyc), 2627 (nsec) 00:06:44.484 00:06:44.484 real 0m1.256s 00:06:44.484 user 0m1.104s 00:06:44.484 sys 0m0.046s 00:06:44.484 12:51:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.484 ************************************ 00:06:44.484 END TEST thread_poller_perf 00:06:44.484 ************************************ 00:06:44.484 12:51:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.484 12:51:24 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.484 12:51:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:44.484 12:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.484 12:51:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.484 ************************************ 00:06:44.484 START TEST thread_poller_perf 00:06:44.484 ************************************ 00:06:44.484 12:51:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.484 [2024-12-13 12:51:24.964854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:44.484 [2024-12-13 12:51:24.964948] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69897 ] 00:06:44.484 [2024-12-13 12:51:25.102934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.484 [2024-12-13 12:51:25.163244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.484 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:45.860 [2024-12-13T12:51:26.636Z] ====================================== 00:06:45.860 [2024-12-13T12:51:26.636Z] busy:2203283660 (cyc) 00:06:45.860 [2024-12-13T12:51:26.636Z] total_run_count: 5146000 00:06:45.860 [2024-12-13T12:51:26.636Z] tsc_hz: 2200000000 (cyc) 00:06:45.860 [2024-12-13T12:51:26.636Z] ====================================== 00:06:45.860 [2024-12-13T12:51:26.636Z] poller_cost: 428 (cyc), 194 (nsec) 00:06:45.860 ************************************ 00:06:45.860 END TEST thread_poller_perf 00:06:45.860 ************************************ 00:06:45.860 00:06:45.860 real 0m1.269s 00:06:45.860 user 0m1.112s 00:06:45.860 sys 0m0.050s 00:06:45.860 12:51:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.860 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.860 12:51:26 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:45.860 ************************************ 00:06:45.860 END TEST thread 00:06:45.860 ************************************ 00:06:45.860 00:06:45.860 real 0m2.790s 00:06:45.860 user 0m2.345s 00:06:45.860 sys 0m0.231s 00:06:45.860 12:51:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.860 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.860 12:51:26 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:45.860 12:51:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.860 12:51:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.860 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.860 ************************************ 00:06:45.860 START TEST accel 00:06:45.860 ************************************ 00:06:45.860 12:51:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:45.860 * Looking for test storage... 
00:06:45.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:45.860 12:51:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:45.860 12:51:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:45.860 12:51:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:45.860 12:51:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:45.860 12:51:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:45.860 12:51:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:45.860 12:51:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:45.860 12:51:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:45.860 12:51:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:45.860 12:51:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.860 12:51:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:45.860 12:51:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:45.860 12:51:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:45.860 12:51:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:45.860 12:51:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:45.860 12:51:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:45.860 12:51:26 -- scripts/common.sh@344 -- # : 1 00:06:45.860 12:51:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:45.860 12:51:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.860 12:51:26 -- scripts/common.sh@364 -- # decimal 1 00:06:45.860 12:51:26 -- scripts/common.sh@352 -- # local d=1 00:06:45.860 12:51:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.860 12:51:26 -- scripts/common.sh@354 -- # echo 1 00:06:45.860 12:51:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:45.860 12:51:26 -- scripts/common.sh@365 -- # decimal 2 00:06:45.860 12:51:26 -- scripts/common.sh@352 -- # local d=2 00:06:45.860 12:51:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.860 12:51:26 -- scripts/common.sh@354 -- # echo 2 00:06:45.860 12:51:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:45.860 12:51:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:45.860 12:51:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:45.860 12:51:26 -- scripts/common.sh@367 -- # return 0 00:06:45.860 12:51:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.860 12:51:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:45.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.860 --rc genhtml_branch_coverage=1 00:06:45.860 --rc genhtml_function_coverage=1 00:06:45.860 --rc genhtml_legend=1 00:06:45.860 --rc geninfo_all_blocks=1 00:06:45.860 --rc geninfo_unexecuted_blocks=1 00:06:45.860 00:06:45.860 ' 00:06:45.860 12:51:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:45.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.860 --rc genhtml_branch_coverage=1 00:06:45.860 --rc genhtml_function_coverage=1 00:06:45.860 --rc genhtml_legend=1 00:06:45.860 --rc geninfo_all_blocks=1 00:06:45.860 --rc geninfo_unexecuted_blocks=1 00:06:45.860 00:06:45.860 ' 00:06:45.860 12:51:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:45.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.860 --rc genhtml_branch_coverage=1 00:06:45.860 --rc genhtml_function_coverage=1 00:06:45.860 --rc genhtml_legend=1 00:06:45.860 --rc geninfo_all_blocks=1 00:06:45.860 --rc geninfo_unexecuted_blocks=1 00:06:45.860 00:06:45.860 ' 00:06:45.860 12:51:26 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:45.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.860 --rc genhtml_branch_coverage=1 00:06:45.860 --rc genhtml_function_coverage=1 00:06:45.860 --rc genhtml_legend=1 00:06:45.860 --rc geninfo_all_blocks=1 00:06:45.860 --rc geninfo_unexecuted_blocks=1 00:06:45.860 00:06:45.860 ' 00:06:45.860 12:51:26 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:45.860 12:51:26 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:45.860 12:51:26 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.860 12:51:26 -- accel/accel.sh@59 -- # spdk_tgt_pid=69973 00:06:45.860 12:51:26 -- accel/accel.sh@60 -- # waitforlisten 69973 00:06:45.860 12:51:26 -- common/autotest_common.sh@829 -- # '[' -z 69973 ']' 00:06:45.860 12:51:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.860 12:51:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.860 12:51:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.860 12:51:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.860 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.860 12:51:26 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:45.860 12:51:26 -- accel/accel.sh@58 -- # build_accel_config 00:06:45.860 12:51:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.860 12:51:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.860 12:51:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.860 12:51:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.860 12:51:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.860 12:51:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.860 12:51:26 -- accel/accel.sh@42 -- # jq -r . 00:06:45.860 [2024-12-13 12:51:26.537468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:45.860 [2024-12-13 12:51:26.537567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69973 ] 00:06:46.119 [2024-12-13 12:51:26.673390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.119 [2024-12-13 12:51:26.736659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.119 [2024-12-13 12:51:26.736826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.055 12:51:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.055 12:51:27 -- common/autotest_common.sh@862 -- # return 0 00:06:47.055 12:51:27 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:47.055 12:51:27 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:47.055 12:51:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.055 12:51:27 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:47.055 12:51:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.055 12:51:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.055 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.055 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.055 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.055 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.055 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.055 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.055 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.055 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.055 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.055 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.055 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.055 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # IFS== 00:06:47.056 12:51:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:47.056 12:51:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:47.056 12:51:27 -- accel/accel.sh@67 -- # killprocess 69973 00:06:47.056 12:51:27 -- common/autotest_common.sh@936 -- # '[' -z 69973 ']' 00:06:47.056 12:51:27 -- common/autotest_common.sh@940 -- # kill -0 69973 00:06:47.056 12:51:27 -- common/autotest_common.sh@941 -- # uname 00:06:47.056 12:51:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.056 12:51:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69973 00:06:47.056 12:51:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.056 killing process with pid 69973 00:06:47.056 12:51:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.056 12:51:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69973' 00:06:47.056 12:51:27 -- common/autotest_common.sh@955 -- # kill 69973 00:06:47.056 12:51:27 -- common/autotest_common.sh@960 -- # wait 69973 00:06:47.315 12:51:28 -- accel/accel.sh@68 -- # trap - ERR 00:06:47.315 12:51:28 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:47.315 12:51:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:47.315 12:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.315 12:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:47.315 12:51:28 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:47.315 12:51:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:47.315 12:51:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.315 12:51:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.315 12:51:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.315 12:51:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.315 12:51:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.315 12:51:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.315 12:51:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.315 12:51:28 -- accel/accel.sh@42 -- # jq -r . 
00:06:47.574 12:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.574 12:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:47.574 12:51:28 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:47.574 12:51:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.574 12:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.574 12:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:47.574 ************************************ 00:06:47.574 START TEST accel_missing_filename 00:06:47.574 ************************************ 00:06:47.574 12:51:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:47.574 12:51:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:47.574 12:51:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:47.574 12:51:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:47.574 12:51:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.574 12:51:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:47.574 12:51:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.574 12:51:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:47.574 12:51:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:47.574 12:51:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.574 12:51:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.574 12:51:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.574 12:51:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.574 12:51:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.574 12:51:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.574 12:51:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.574 12:51:28 -- accel/accel.sh@42 -- # jq -r . 00:06:47.574 [2024-12-13 12:51:28.174971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.574 [2024-12-13 12:51:28.175101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70048 ] 00:06:47.574 [2024-12-13 12:51:28.311539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.833 [2024-12-13 12:51:28.364844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.833 [2024-12-13 12:51:28.416427] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.833 [2024-12-13 12:51:28.486884] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:47.833 A filename is required. 
00:06:47.833 12:51:28 -- common/autotest_common.sh@653 -- # es=234 00:06:47.833 12:51:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.833 12:51:28 -- common/autotest_common.sh@662 -- # es=106 00:06:47.833 12:51:28 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:47.833 12:51:28 -- common/autotest_common.sh@670 -- # es=1 00:06:47.833 12:51:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.833 00:06:47.833 real 0m0.407s 00:06:47.833 user 0m0.245s 00:06:47.833 sys 0m0.113s 00:06:47.833 12:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.833 12:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:47.833 ************************************ 00:06:47.833 END TEST accel_missing_filename 00:06:47.833 ************************************ 00:06:47.833 12:51:28 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.833 12:51:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:47.833 12:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.833 12:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:48.092 ************************************ 00:06:48.092 START TEST accel_compress_verify 00:06:48.092 ************************************ 00:06:48.092 12:51:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.092 12:51:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:48.092 12:51:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.092 12:51:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:48.092 12:51:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.092 12:51:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:48.092 12:51:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.092 12:51:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.092 12:51:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.092 12:51:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.092 12:51:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.092 12:51:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.092 12:51:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.092 12:51:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.092 12:51:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.092 12:51:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.092 12:51:28 -- accel/accel.sh@42 -- # jq -r . 00:06:48.092 [2024-12-13 12:51:28.636106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:48.093 [2024-12-13 12:51:28.636253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70067 ] 00:06:48.093 [2024-12-13 12:51:28.772329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.093 [2024-12-13 12:51:28.832330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.366 [2024-12-13 12:51:28.890072] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.366 [2024-12-13 12:51:28.961647] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:48.366 00:06:48.366 Compression does not support the verify option, aborting. 00:06:48.366 12:51:29 -- common/autotest_common.sh@653 -- # es=161 00:06:48.366 12:51:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.366 12:51:29 -- common/autotest_common.sh@662 -- # es=33 00:06:48.366 12:51:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:48.366 12:51:29 -- common/autotest_common.sh@670 -- # es=1 00:06:48.366 12:51:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.366 00:06:48.366 real 0m0.406s 00:06:48.366 user 0m0.228s 00:06:48.366 sys 0m0.115s 00:06:48.366 12:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.366 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.366 ************************************ 00:06:48.366 END TEST accel_compress_verify 00:06:48.366 ************************************ 00:06:48.366 12:51:29 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:48.366 12:51:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:48.366 12:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.366 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.366 ************************************ 00:06:48.366 START TEST accel_wrong_workload 00:06:48.366 ************************************ 00:06:48.366 12:51:29 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:48.366 12:51:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:48.366 12:51:29 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:48.366 12:51:29 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:48.366 12:51:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.366 12:51:29 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:48.366 12:51:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.366 12:51:29 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:48.366 12:51:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:48.366 12:51:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.366 12:51:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.366 12:51:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.366 12:51:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.366 12:51:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.366 12:51:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.366 12:51:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.367 12:51:29 -- accel/accel.sh@42 -- # jq -r . 
00:06:48.367 Unsupported workload type: foobar 00:06:48.367 [2024-12-13 12:51:29.088021] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:48.367 accel_perf options: 00:06:48.367 [-h help message] 00:06:48.367 [-q queue depth per core] 00:06:48.367 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.367 [-T number of threads per core 00:06:48.367 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.367 [-t time in seconds] 00:06:48.367 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.367 [ dif_verify, , dif_generate, dif_generate_copy 00:06:48.367 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.367 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.367 [-S for crc32c workload, use this seed value (default 0) 00:06:48.367 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.367 [-f for fill workload, use this BYTE value (default 255) 00:06:48.367 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.367 [-y verify result if this switch is on] 00:06:48.367 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.367 Can be used to spread operations across a wider range of memory. 00:06:48.367 12:51:29 -- common/autotest_common.sh@653 -- # es=1 00:06:48.367 12:51:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.367 12:51:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.367 12:51:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.367 00:06:48.367 real 0m0.027s 00:06:48.367 user 0m0.015s 00:06:48.367 sys 0m0.012s 00:06:48.367 12:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.367 ************************************ 00:06:48.367 END TEST accel_wrong_workload 00:06:48.367 ************************************ 00:06:48.367 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.367 12:51:29 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.367 12:51:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:48.367 12:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.367 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.647 ************************************ 00:06:48.647 START TEST accel_negative_buffers 00:06:48.647 ************************************ 00:06:48.647 12:51:29 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.647 12:51:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:48.647 12:51:29 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:48.647 12:51:29 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:48.647 12:51:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.647 12:51:29 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:48.647 12:51:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.647 12:51:29 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:48.647 12:51:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:48.647 12:51:29 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:48.647 12:51:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.647 12:51:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.647 12:51:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.647 12:51:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.647 12:51:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.647 12:51:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.647 12:51:29 -- accel/accel.sh@42 -- # jq -r . 00:06:48.647 -x option must be non-negative. 00:06:48.647 [2024-12-13 12:51:29.162541] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:48.647 accel_perf options: 00:06:48.647 [-h help message] 00:06:48.647 [-q queue depth per core] 00:06:48.647 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.647 [-T number of threads per core 00:06:48.647 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.647 [-t time in seconds] 00:06:48.647 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.647 [ dif_verify, , dif_generate, dif_generate_copy 00:06:48.647 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.647 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.647 [-S for crc32c workload, use this seed value (default 0) 00:06:48.647 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.647 [-f for fill workload, use this BYTE value (default 255) 00:06:48.647 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.647 [-y verify result if this switch is on] 00:06:48.647 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.647 Can be used to spread operations across a wider range of memory. 
00:06:48.647 12:51:29 -- common/autotest_common.sh@653 -- # es=1 00:06:48.647 12:51:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.647 12:51:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.647 12:51:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.647 00:06:48.647 real 0m0.028s 00:06:48.647 user 0m0.017s 00:06:48.647 sys 0m0.010s 00:06:48.647 12:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.647 ************************************ 00:06:48.647 END TEST accel_negative_buffers 00:06:48.647 ************************************ 00:06:48.647 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.647 12:51:29 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:48.647 12:51:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:48.647 12:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.647 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.647 ************************************ 00:06:48.647 START TEST accel_crc32c 00:06:48.647 ************************************ 00:06:48.647 12:51:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:48.647 12:51:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.647 12:51:29 -- accel/accel.sh@17 -- # local accel_module 00:06:48.647 12:51:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:48.647 12:51:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:48.647 12:51:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.647 12:51:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.647 12:51:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.647 12:51:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.647 12:51:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.647 12:51:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.647 12:51:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.647 12:51:29 -- accel/accel.sh@42 -- # jq -r . 00:06:48.647 [2024-12-13 12:51:29.236539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:48.647 [2024-12-13 12:51:29.236624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70131 ] 00:06:48.647 [2024-12-13 12:51:29.374717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.905 [2024-12-13 12:51:29.436227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.282 12:51:30 -- accel/accel.sh@18 -- # out=' 00:06:50.282 SPDK Configuration: 00:06:50.282 Core mask: 0x1 00:06:50.282 00:06:50.282 Accel Perf Configuration: 00:06:50.282 Workload Type: crc32c 00:06:50.282 CRC-32C seed: 32 00:06:50.282 Transfer size: 4096 bytes 00:06:50.282 Vector count 1 00:06:50.282 Module: software 00:06:50.282 Queue depth: 32 00:06:50.282 Allocate depth: 32 00:06:50.282 # threads/core: 1 00:06:50.282 Run time: 1 seconds 00:06:50.282 Verify: Yes 00:06:50.282 00:06:50.282 Running for 1 seconds... 
00:06:50.282 00:06:50.282 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.282 ------------------------------------------------------------------------------------ 00:06:50.282 0,0 534400/s 2087 MiB/s 0 0 00:06:50.282 ==================================================================================== 00:06:50.282 Total 534400/s 2087 MiB/s 0 0' 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:50.282 12:51:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.282 12:51:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.282 12:51:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.282 12:51:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.282 12:51:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.282 12:51:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.282 12:51:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.282 12:51:30 -- accel/accel.sh@42 -- # jq -r . 00:06:50.282 [2024-12-13 12:51:30.644606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.282 [2024-12-13 12:51:30.644701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:06:50.282 [2024-12-13 12:51:30.776187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.282 [2024-12-13 12:51:30.832194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val= 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val= 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=0x1 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val= 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val= 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=crc32c 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=32 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val= 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=software 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=32 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=32 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=1 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val=Yes 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val= 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.282 12:51:30 -- accel/accel.sh@21 -- # val= 00:06:50.282 12:51:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.282 12:51:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.660 12:51:32 -- accel/accel.sh@21 -- # val= 00:06:51.660 12:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.660 12:51:32 -- accel/accel.sh@21 -- # val= 00:06:51.660 12:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.660 12:51:32 -- accel/accel.sh@21 -- # val= 00:06:51.660 12:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.660 12:51:32 -- accel/accel.sh@21 -- # val= 00:06:51.660 12:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.660 12:51:32 -- accel/accel.sh@21 -- # val= 00:06:51.660 12:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.660 12:51:32 -- 
accel/accel.sh@20 -- # read -r var val 00:06:51.660 12:51:32 -- accel/accel.sh@21 -- # val= 00:06:51.660 12:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.660 12:51:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.660 12:51:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.660 12:51:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:51.660 12:51:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.660 00:06:51.660 real 0m2.805s 00:06:51.660 user 0m2.395s 00:06:51.660 sys 0m0.211s 00:06:51.660 12:51:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.660 12:51:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 ************************************ 00:06:51.660 END TEST accel_crc32c 00:06:51.660 ************************************ 00:06:51.660 12:51:32 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:51.660 12:51:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.660 12:51:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.660 12:51:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 ************************************ 00:06:51.660 START TEST accel_crc32c_C2 00:06:51.660 ************************************ 00:06:51.660 12:51:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:51.660 12:51:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.660 12:51:32 -- accel/accel.sh@17 -- # local accel_module 00:06:51.660 12:51:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:51.660 12:51:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:51.660 12:51:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.660 12:51:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.660 12:51:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.660 12:51:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.660 12:51:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.660 12:51:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.660 12:51:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.660 12:51:32 -- accel/accel.sh@42 -- # jq -r . 00:06:51.660 [2024-12-13 12:51:32.093394] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.660 [2024-12-13 12:51:32.093501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70185 ] 00:06:51.660 [2024-12-13 12:51:32.225126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.660 [2024-12-13 12:51:32.273693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.037 12:51:33 -- accel/accel.sh@18 -- # out=' 00:06:53.037 SPDK Configuration: 00:06:53.037 Core mask: 0x1 00:06:53.037 00:06:53.037 Accel Perf Configuration: 00:06:53.037 Workload Type: crc32c 00:06:53.037 CRC-32C seed: 0 00:06:53.037 Transfer size: 4096 bytes 00:06:53.037 Vector count 2 00:06:53.037 Module: software 00:06:53.037 Queue depth: 32 00:06:53.037 Allocate depth: 32 00:06:53.037 # threads/core: 1 00:06:53.037 Run time: 1 seconds 00:06:53.037 Verify: Yes 00:06:53.037 00:06:53.037 Running for 1 seconds... 
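The accel_perf invocations echoed above can be rerun by hand outside the autotest harness; the flags map one-to-one onto the printed configuration (-t 1 = run time in seconds, -w crc32c = workload type, -S 32 = CRC-32C seed, -C 2 = vector count, -y = verify). A minimal sketch, assuming a built SPDK tree at the path shown in the log and dropping the -c /dev/fd/62 argument through which accel.sh passes the accel configuration assembled by build_accel_config:
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # single 4096-byte vector, seed 32
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2    # two vectors, default seed 0
Both command lines are copied from the accel_perf entries above, so only the dropped config argument differs.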
00:06:53.037 00:06:53.037 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.037 ------------------------------------------------------------------------------------ 00:06:53.037 0,0 428864/s 3350 MiB/s 0 0 00:06:53.037 ==================================================================================== 00:06:53.037 Total 428864/s 1675 MiB/s 0 0' 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:53.037 12:51:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:53.037 12:51:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.037 12:51:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.037 12:51:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.037 12:51:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.037 12:51:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.037 12:51:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.037 12:51:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.037 12:51:33 -- accel/accel.sh@42 -- # jq -r . 00:06:53.037 [2024-12-13 12:51:33.475001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:53.037 [2024-12-13 12:51:33.475565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70199 ] 00:06:53.037 [2024-12-13 12:51:33.609619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.037 [2024-12-13 12:51:33.655979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val= 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val= 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=0x1 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val= 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val= 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=crc32c 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=0 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val= 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=software 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=32 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=32 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=1 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val=Yes 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val= 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.037 12:51:33 -- accel/accel.sh@21 -- # val= 00:06:53.037 12:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.037 12:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.415 12:51:34 -- accel/accel.sh@21 -- # val= 00:06:54.415 12:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:54.415 12:51:34 -- accel/accel.sh@21 -- # val= 00:06:54.415 12:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:54.415 12:51:34 -- accel/accel.sh@21 -- # val= 00:06:54.415 12:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:54.415 12:51:34 -- accel/accel.sh@21 -- # val= 00:06:54.415 12:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:54.415 12:51:34 -- accel/accel.sh@21 -- # val= 00:06:54.415 12:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:54.415 12:51:34 -- 
accel/accel.sh@20 -- # read -r var val 00:06:54.415 12:51:34 -- accel/accel.sh@21 -- # val= 00:06:54.415 12:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:54.415 12:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:54.415 12:51:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.415 12:51:34 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:54.415 12:51:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.415 00:06:54.415 real 0m2.767s 00:06:54.415 user 0m2.377s 00:06:54.415 sys 0m0.192s 00:06:54.415 12:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.415 ************************************ 00:06:54.415 END TEST accel_crc32c_C2 00:06:54.415 ************************************ 00:06:54.415 12:51:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.415 12:51:34 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:54.415 12:51:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:54.415 12:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.415 12:51:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.415 ************************************ 00:06:54.415 START TEST accel_copy 00:06:54.415 ************************************ 00:06:54.415 12:51:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:54.415 12:51:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.415 12:51:34 -- accel/accel.sh@17 -- # local accel_module 00:06:54.415 12:51:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:54.415 12:51:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:54.415 12:51:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.415 12:51:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.415 12:51:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.415 12:51:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.415 12:51:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.415 12:51:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.415 12:51:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.415 12:51:34 -- accel/accel.sh@42 -- # jq -r . 00:06:54.415 [2024-12-13 12:51:34.912043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.415 [2024-12-13 12:51:34.912314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ] 00:06:54.415 [2024-12-13 12:51:35.046699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.415 [2024-12-13 12:51:35.101880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.791 12:51:36 -- accel/accel.sh@18 -- # out=' 00:06:55.791 SPDK Configuration: 00:06:55.791 Core mask: 0x1 00:06:55.791 00:06:55.791 Accel Perf Configuration: 00:06:55.791 Workload Type: copy 00:06:55.791 Transfer size: 4096 bytes 00:06:55.791 Vector count 1 00:06:55.791 Module: software 00:06:55.791 Queue depth: 32 00:06:55.791 Allocate depth: 32 00:06:55.791 # threads/core: 1 00:06:55.791 Run time: 1 seconds 00:06:55.791 Verify: Yes 00:06:55.791 00:06:55.791 Running for 1 seconds... 
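As a quick consistency check, the Bandwidth column is simply Transfers multiplied by the bytes handled per operation: the single-vector crc32c run above reports 534400/s * 4096 B ≈ 2087 MiB/s, matching its table, while the two-vector run reports 428864/s * 8192 B ≈ 3350 MiB/s on the per-core row and 428864/s * 4096 B ≈ 1675 MiB/s on the Total row, so the two figures appear to count the whole transfer and a single 4096-byte vector respectively.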
00:06:55.791 00:06:55.791 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.791 ------------------------------------------------------------------------------------ 00:06:55.791 0,0 387040/s 1511 MiB/s 0 0 00:06:55.791 ==================================================================================== 00:06:55.791 Total 387040/s 1511 MiB/s 0 0' 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:55.791 12:51:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:55.791 12:51:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.791 12:51:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.791 12:51:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.791 12:51:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.791 12:51:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.791 12:51:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.791 12:51:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.791 12:51:36 -- accel/accel.sh@42 -- # jq -r . 00:06:55.791 [2024-12-13 12:51:36.327273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.791 [2024-12-13 12:51:36.327587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70255 ] 00:06:55.791 [2024-12-13 12:51:36.455748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.791 [2024-12-13 12:51:36.502026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val= 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val= 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val=0x1 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val= 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val= 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val=copy 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- 
accel/accel.sh@21 -- # val= 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val=software 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val=32 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val=32 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val=1 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val=Yes 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val= 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.791 12:51:36 -- accel/accel.sh@21 -- # val= 00:06:55.791 12:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # IFS=: 00:06:55.791 12:51:36 -- accel/accel.sh@20 -- # read -r var val 00:06:57.168 12:51:37 -- accel/accel.sh@21 -- # val= 00:06:57.168 12:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:57.168 12:51:37 -- accel/accel.sh@21 -- # val= 00:06:57.168 12:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:57.168 12:51:37 -- accel/accel.sh@21 -- # val= 00:06:57.168 12:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:57.168 12:51:37 -- accel/accel.sh@21 -- # val= 00:06:57.168 12:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:57.168 12:51:37 -- accel/accel.sh@21 -- # val= 00:06:57.168 12:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:57.168 12:51:37 -- accel/accel.sh@21 -- # val= 00:06:57.168 12:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.168 12:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:57.168 12:51:37 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.168 12:51:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.168 12:51:37 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:57.168 12:51:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.168 00:06:57.168 real 0m2.790s 00:06:57.168 user 0m2.383s 00:06:57.168 sys 0m0.209s 00:06:57.168 12:51:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.168 12:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.168 ************************************ 00:06:57.168 END TEST accel_copy 00:06:57.168 ************************************ 00:06:57.168 12:51:37 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.168 12:51:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:57.168 12:51:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.168 12:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.168 ************************************ 00:06:57.168 START TEST accel_fill 00:06:57.168 ************************************ 00:06:57.168 12:51:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.168 12:51:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.168 12:51:37 -- accel/accel.sh@17 -- # local accel_module 00:06:57.168 12:51:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.168 12:51:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.168 12:51:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.168 12:51:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.168 12:51:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.168 12:51:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.168 12:51:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.168 12:51:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.168 12:51:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.168 12:51:37 -- accel/accel.sh@42 -- # jq -r . 00:06:57.168 [2024-12-13 12:51:37.752662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.168 [2024-12-13 12:51:37.752797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70284 ] 00:06:57.169 [2024-12-13 12:51:37.885972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.169 [2024-12-13 12:51:37.934420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.545 12:51:39 -- accel/accel.sh@18 -- # out=' 00:06:58.545 SPDK Configuration: 00:06:58.545 Core mask: 0x1 00:06:58.545 00:06:58.545 Accel Perf Configuration: 00:06:58.545 Workload Type: fill 00:06:58.545 Fill pattern: 0x80 00:06:58.545 Transfer size: 4096 bytes 00:06:58.545 Vector count 1 00:06:58.545 Module: software 00:06:58.545 Queue depth: 64 00:06:58.545 Allocate depth: 64 00:06:58.545 # threads/core: 1 00:06:58.545 Run time: 1 seconds 00:06:58.545 Verify: Yes 00:06:58.545 00:06:58.545 Running for 1 seconds... 
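For the fill case the extra flags are also visible in the configuration block just printed: -f 128 shows up as fill pattern 0x80 (128 decimal), and -q 64 / -a 64 set the queue depth and allocate depth of 64. A hedged one-liner for rerunning it, again with the -c config argument dropped:
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y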
00:06:58.545 00:06:58.545 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.545 ------------------------------------------------------------------------------------ 00:06:58.545 0,0 564992/s 2207 MiB/s 0 0 00:06:58.545 ==================================================================================== 00:06:58.545 Total 564992/s 2207 MiB/s 0 0' 00:06:58.545 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.545 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.545 12:51:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.545 12:51:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.545 12:51:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.545 12:51:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.545 12:51:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.545 12:51:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.545 12:51:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.545 12:51:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.545 12:51:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.545 12:51:39 -- accel/accel.sh@42 -- # jq -r . 00:06:58.545 [2024-12-13 12:51:39.142296] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:58.545 [2024-12-13 12:51:39.142407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70304 ] 00:06:58.545 [2024-12-13 12:51:39.278503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.804 [2024-12-13 12:51:39.333985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.804 12:51:39 -- accel/accel.sh@21 -- # val= 00:06:58.804 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.804 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.804 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.804 12:51:39 -- accel/accel.sh@21 -- # val= 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=0x1 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val= 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val= 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=fill 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=0x80 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 
00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val= 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=software 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=64 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=64 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=1 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val=Yes 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val= 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.808 12:51:39 -- accel/accel.sh@21 -- # val= 00:06:58.808 12:51:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:58.808 12:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.745 12:51:40 -- accel/accel.sh@21 -- # val= 00:06:59.745 12:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.745 12:51:40 -- accel/accel.sh@21 -- # val= 00:06:59.745 12:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.745 12:51:40 -- accel/accel.sh@21 -- # val= 00:06:59.745 12:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.745 12:51:40 -- accel/accel.sh@21 -- # val= 00:06:59.745 12:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.745 12:51:40 -- accel/accel.sh@21 -- # val= 00:06:59.745 12:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # IFS=: 
00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.745 12:51:40 -- accel/accel.sh@21 -- # val= 00:06:59.745 12:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.745 12:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.745 12:51:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.745 12:51:40 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:59.745 12:51:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.745 00:06:59.745 real 0m2.783s 00:06:59.745 user 0m2.366s 00:06:59.745 sys 0m0.217s 00:06:59.745 12:51:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.745 12:51:40 -- common/autotest_common.sh@10 -- # set +x 00:06:59.745 ************************************ 00:06:59.745 END TEST accel_fill 00:06:59.745 ************************************ 00:07:00.004 12:51:40 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:00.004 12:51:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:00.004 12:51:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.004 12:51:40 -- common/autotest_common.sh@10 -- # set +x 00:07:00.004 ************************************ 00:07:00.004 START TEST accel_copy_crc32c 00:07:00.004 ************************************ 00:07:00.004 12:51:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:00.004 12:51:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.004 12:51:40 -- accel/accel.sh@17 -- # local accel_module 00:07:00.004 12:51:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:00.004 12:51:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:00.004 12:51:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.004 12:51:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.004 12:51:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.004 12:51:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.004 12:51:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.004 12:51:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.004 12:51:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.004 12:51:40 -- accel/accel.sh@42 -- # jq -r . 00:07:00.004 [2024-12-13 12:51:40.581689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:00.004 [2024-12-13 12:51:40.581802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70338 ] 00:07:00.004 [2024-12-13 12:51:40.703392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.004 [2024-12-13 12:51:40.751711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.382 12:51:41 -- accel/accel.sh@18 -- # out=' 00:07:01.382 SPDK Configuration: 00:07:01.382 Core mask: 0x1 00:07:01.382 00:07:01.382 Accel Perf Configuration: 00:07:01.382 Workload Type: copy_crc32c 00:07:01.382 CRC-32C seed: 0 00:07:01.382 Vector size: 4096 bytes 00:07:01.382 Transfer size: 4096 bytes 00:07:01.382 Vector count 1 00:07:01.382 Module: software 00:07:01.382 Queue depth: 32 00:07:01.382 Allocate depth: 32 00:07:01.382 # threads/core: 1 00:07:01.382 Run time: 1 seconds 00:07:01.382 Verify: Yes 00:07:01.382 00:07:01.382 Running for 1 seconds... 
00:07:01.382 00:07:01.382 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.382 ------------------------------------------------------------------------------------ 00:07:01.382 0,0 310336/s 1212 MiB/s 0 0 00:07:01.382 ==================================================================================== 00:07:01.382 Total 310336/s 1212 MiB/s 0 0' 00:07:01.382 12:51:41 -- accel/accel.sh@20 -- # IFS=: 00:07:01.382 12:51:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:01.382 12:51:41 -- accel/accel.sh@20 -- # read -r var val 00:07:01.382 12:51:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:01.382 12:51:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.382 12:51:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.382 12:51:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.382 12:51:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.382 12:51:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.382 12:51:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.382 12:51:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.382 12:51:41 -- accel/accel.sh@42 -- # jq -r . 00:07:01.382 [2024-12-13 12:51:41.956539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:01.382 [2024-12-13 12:51:41.956640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70352 ] 00:07:01.382 [2024-12-13 12:51:42.091818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.382 [2024-12-13 12:51:42.139487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val= 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val= 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=0x1 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val= 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val= 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=0 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 
12:51:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val= 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=software 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=32 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=32 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=1 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val=Yes 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val= 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:01.645 12:51:42 -- accel/accel.sh@21 -- # val= 00:07:01.645 12:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # IFS=: 00:07:01.645 12:51:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.586 12:51:43 -- accel/accel.sh@21 -- # val= 00:07:02.586 12:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.586 12:51:43 -- accel/accel.sh@20 -- # IFS=: 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # read -r var val 00:07:02.587 12:51:43 -- accel/accel.sh@21 -- # val= 00:07:02.587 12:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # IFS=: 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # read -r var val 00:07:02.587 12:51:43 -- accel/accel.sh@21 -- # val= 00:07:02.587 12:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # IFS=: 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # read -r var val 00:07:02.587 12:51:43 -- accel/accel.sh@21 -- # val= 00:07:02.587 12:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # IFS=: 
00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # read -r var val 00:07:02.587 12:51:43 -- accel/accel.sh@21 -- # val= 00:07:02.587 12:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # IFS=: 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # read -r var val 00:07:02.587 12:51:43 -- accel/accel.sh@21 -- # val= 00:07:02.587 12:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # IFS=: 00:07:02.587 12:51:43 -- accel/accel.sh@20 -- # read -r var val 00:07:02.587 12:51:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.587 12:51:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:02.587 12:51:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.587 00:07:02.587 real 0m2.756s 00:07:02.587 user 0m2.359s 00:07:02.587 sys 0m0.202s 00:07:02.587 12:51:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.587 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:02.587 ************************************ 00:07:02.587 END TEST accel_copy_crc32c 00:07:02.587 ************************************ 00:07:02.587 12:51:43 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:02.587 12:51:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:02.587 12:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.587 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:02.846 ************************************ 00:07:02.846 START TEST accel_copy_crc32c_C2 00:07:02.846 ************************************ 00:07:02.846 12:51:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:02.846 12:51:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.846 12:51:43 -- accel/accel.sh@17 -- # local accel_module 00:07:02.846 12:51:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:02.846 12:51:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:02.846 12:51:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.846 12:51:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.846 12:51:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.846 12:51:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.846 12:51:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.846 12:51:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.846 12:51:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.846 12:51:43 -- accel/accel.sh@42 -- # jq -r . 00:07:02.846 [2024-12-13 12:51:43.393414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:02.846 [2024-12-13 12:51:43.393512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70387 ] 00:07:02.846 [2024-12-13 12:51:43.522291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.846 [2024-12-13 12:51:43.573249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.223 12:51:44 -- accel/accel.sh@18 -- # out=' 00:07:04.223 SPDK Configuration: 00:07:04.223 Core mask: 0x1 00:07:04.223 00:07:04.223 Accel Perf Configuration: 00:07:04.223 Workload Type: copy_crc32c 00:07:04.223 CRC-32C seed: 0 00:07:04.223 Vector size: 4096 bytes 00:07:04.223 Transfer size: 8192 bytes 00:07:04.223 Vector count 2 00:07:04.223 Module: software 00:07:04.223 Queue depth: 32 00:07:04.223 Allocate depth: 32 00:07:04.223 # threads/core: 1 00:07:04.223 Run time: 1 seconds 00:07:04.223 Verify: Yes 00:07:04.223 00:07:04.223 Running for 1 seconds... 00:07:04.223 00:07:04.223 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.223 ------------------------------------------------------------------------------------ 00:07:04.223 0,0 218208/s 1704 MiB/s 0 0 00:07:04.223 ==================================================================================== 00:07:04.223 Total 218208/s 852 MiB/s 0 0' 00:07:04.223 12:51:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.223 12:51:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.223 12:51:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:04.223 12:51:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:04.223 12:51:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.223 12:51:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.223 12:51:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.223 12:51:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.223 12:51:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.223 12:51:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.223 12:51:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.223 12:51:44 -- accel/accel.sh@42 -- # jq -r . 00:07:04.223 [2024-12-13 12:51:44.785860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
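For copy_crc32c with -C 2, the configuration above distinguishes the 4096-byte vector size from the 8192-byte transfer size: each operation copies and checksums two 4096-byte vectors. The reported throughput is consistent with that split, since 218208/s * 8192 B ≈ 1704 MiB/s matches the per-core row and 218208/s * 4096 B ≈ 852 MiB/s matches the Total row.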
00:07:04.223 [2024-12-13 12:51:44.785941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70406 ] 00:07:04.223 [2024-12-13 12:51:44.907773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.223 [2024-12-13 12:51:44.955452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val= 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val= 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=0x1 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val= 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val= 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=0 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val= 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=software 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=32 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=32 
00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=1 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val=Yes 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val= 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:04.481 12:51:45 -- accel/accel.sh@21 -- # val= 00:07:04.481 12:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # IFS=: 00:07:04.481 12:51:45 -- accel/accel.sh@20 -- # read -r var val 00:07:05.415 12:51:46 -- accel/accel.sh@21 -- # val= 00:07:05.415 12:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # IFS=: 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # read -r var val 00:07:05.415 12:51:46 -- accel/accel.sh@21 -- # val= 00:07:05.415 12:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # IFS=: 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # read -r var val 00:07:05.415 12:51:46 -- accel/accel.sh@21 -- # val= 00:07:05.415 12:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # IFS=: 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # read -r var val 00:07:05.415 12:51:46 -- accel/accel.sh@21 -- # val= 00:07:05.415 12:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # IFS=: 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # read -r var val 00:07:05.415 12:51:46 -- accel/accel.sh@21 -- # val= 00:07:05.415 12:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # IFS=: 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # read -r var val 00:07:05.415 12:51:46 -- accel/accel.sh@21 -- # val= 00:07:05.415 12:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # IFS=: 00:07:05.415 12:51:46 -- accel/accel.sh@20 -- # read -r var val 00:07:05.415 12:51:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.415 12:51:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:05.415 12:51:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.415 00:07:05.415 real 0m2.764s 00:07:05.415 user 0m2.380s 00:07:05.415 sys 0m0.185s 00:07:05.415 12:51:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.415 ************************************ 00:07:05.415 END TEST accel_copy_crc32c_C2 00:07:05.415 ************************************ 00:07:05.415 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:07:05.415 12:51:46 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:05.415 12:51:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:05.415 12:51:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.415 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:07:05.415 ************************************ 00:07:05.415 START TEST accel_dualcast 00:07:05.415 ************************************ 00:07:05.674 12:51:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:05.674 12:51:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.674 12:51:46 -- accel/accel.sh@17 -- # local accel_module 00:07:05.674 12:51:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:05.674 12:51:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.674 12:51:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.674 12:51:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.674 12:51:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.674 12:51:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.674 12:51:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.674 12:51:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.674 12:51:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.674 12:51:46 -- accel/accel.sh@42 -- # jq -r . 00:07:05.674 [2024-12-13 12:51:46.216551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:05.674 [2024-12-13 12:51:46.217256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70441 ] 00:07:05.674 [2024-12-13 12:51:46.358941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.674 [2024-12-13 12:51:46.409368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.051 12:51:47 -- accel/accel.sh@18 -- # out=' 00:07:07.051 SPDK Configuration: 00:07:07.051 Core mask: 0x1 00:07:07.051 00:07:07.051 Accel Perf Configuration: 00:07:07.051 Workload Type: dualcast 00:07:07.051 Transfer size: 4096 bytes 00:07:07.051 Vector count 1 00:07:07.051 Module: software 00:07:07.051 Queue depth: 32 00:07:07.051 Allocate depth: 32 00:07:07.051 # threads/core: 1 00:07:07.051 Run time: 1 seconds 00:07:07.051 Verify: Yes 00:07:07.051 00:07:07.051 Running for 1 seconds... 00:07:07.051 00:07:07.051 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.051 ------------------------------------------------------------------------------------ 00:07:07.051 0,0 426720/s 1666 MiB/s 0 0 00:07:07.051 ==================================================================================== 00:07:07.051 Total 426720/s 1666 MiB/s 0 0' 00:07:07.051 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.051 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.051 12:51:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:07.051 12:51:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:07.051 12:51:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.051 12:51:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.051 12:51:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.051 12:51:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.051 12:51:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.051 12:51:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.051 12:51:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.051 12:51:47 -- accel/accel.sh@42 -- # jq -r . 
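(Annotation; not part of the captured console output.) The xtrace above shows accel.sh assembling its accel_perf command line for the dualcast case: build_accel_config collects an accel JSON configuration (empty here, since no module overrides are set) and the harness feeds it to the example binary on file descriptor 62. A minimal sketch of that invocation, restricted to the path and flags visible in this log; the fd-62 plumbing and the placeholder JSON are assumptions about what the wrapper does, not a copy of it:

    # sketch only -- the real JSON comes from build_accel_config, not this placeholder
    accel_json='{}'
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y \
        62< <(printf '%s\n' "$accel_json")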
00:07:07.051 [2024-12-13 12:51:47.612179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.051 [2024-12-13 12:51:47.612267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70455 ] 00:07:07.051 [2024-12-13 12:51:47.747145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.051 [2024-12-13 12:51:47.794600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val= 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val= 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val=0x1 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val= 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val= 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val=dualcast 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val= 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val=software 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val=32 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val=32 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val=1 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 
12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val=Yes 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val= 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.310 12:51:47 -- accel/accel.sh@21 -- # val= 00:07:07.310 12:51:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.310 12:51:47 -- accel/accel.sh@20 -- # read -r var val 00:07:08.245 12:51:48 -- accel/accel.sh@21 -- # val= 00:07:08.245 12:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # IFS=: 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # read -r var val 00:07:08.245 12:51:48 -- accel/accel.sh@21 -- # val= 00:07:08.245 12:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # IFS=: 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # read -r var val 00:07:08.245 12:51:48 -- accel/accel.sh@21 -- # val= 00:07:08.245 12:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # IFS=: 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # read -r var val 00:07:08.245 12:51:48 -- accel/accel.sh@21 -- # val= 00:07:08.245 12:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # IFS=: 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # read -r var val 00:07:08.245 12:51:48 -- accel/accel.sh@21 -- # val= 00:07:08.245 12:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # IFS=: 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # read -r var val 00:07:08.245 12:51:48 -- accel/accel.sh@21 -- # val= 00:07:08.245 12:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # IFS=: 00:07:08.245 12:51:48 -- accel/accel.sh@20 -- # read -r var val 00:07:08.245 12:51:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.245 12:51:48 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:08.245 12:51:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.245 00:07:08.245 real 0m2.792s 00:07:08.245 user 0m2.375s 00:07:08.245 sys 0m0.215s 00:07:08.245 12:51:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.245 12:51:48 -- common/autotest_common.sh@10 -- # set +x 00:07:08.245 ************************************ 00:07:08.245 END TEST accel_dualcast 00:07:08.245 ************************************ 00:07:08.504 12:51:49 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:08.504 12:51:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:08.504 12:51:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.504 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:07:08.504 ************************************ 00:07:08.504 START TEST accel_compare 00:07:08.504 ************************************ 00:07:08.504 12:51:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:08.504 
12:51:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.504 12:51:49 -- accel/accel.sh@17 -- # local accel_module 00:07:08.504 12:51:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:08.504 12:51:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.504 12:51:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.504 12:51:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.504 12:51:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.504 12:51:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.504 12:51:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.504 12:51:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.504 12:51:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.504 12:51:49 -- accel/accel.sh@42 -- # jq -r . 00:07:08.504 [2024-12-13 12:51:49.056978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.504 [2024-12-13 12:51:49.057499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70489 ] 00:07:08.504 [2024-12-13 12:51:49.178528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.504 [2024-12-13 12:51:49.238015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.881 12:51:50 -- accel/accel.sh@18 -- # out=' 00:07:09.881 SPDK Configuration: 00:07:09.881 Core mask: 0x1 00:07:09.881 00:07:09.881 Accel Perf Configuration: 00:07:09.881 Workload Type: compare 00:07:09.881 Transfer size: 4096 bytes 00:07:09.881 Vector count 1 00:07:09.881 Module: software 00:07:09.881 Queue depth: 32 00:07:09.881 Allocate depth: 32 00:07:09.881 # threads/core: 1 00:07:09.881 Run time: 1 seconds 00:07:09.881 Verify: Yes 00:07:09.881 00:07:09.881 Running for 1 seconds... 00:07:09.881 00:07:09.881 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.881 ------------------------------------------------------------------------------------ 00:07:09.881 0,0 554240/s 2165 MiB/s 0 0 00:07:09.881 ==================================================================================== 00:07:09.881 Total 554240/s 2165 MiB/s 0 0' 00:07:09.881 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:09.881 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:09.881 12:51:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:09.881 12:51:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:09.881 12:51:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.881 12:51:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.881 12:51:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.881 12:51:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.881 12:51:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.881 12:51:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.881 12:51:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.881 12:51:50 -- accel/accel.sh@42 -- # jq -r . 00:07:09.881 [2024-12-13 12:51:50.442449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
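(Annotation.) The per-core and Total rows in these tables are transfers per second plus the bandwidth derived from them at the 4096-byte transfer size shown in the configuration block. For the compare run above, for example:

    # 554240 transfers/s x 4096 bytes per transfer, expressed in MiB/s
    awk 'BEGIN { printf "%.0f MiB/s\n", 554240 * 4096 / (1024 * 1024) }'   # prints: 2165 MiB/s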
00:07:09.881 [2024-12-13 12:51:50.442551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:07:09.881 [2024-12-13 12:51:50.568722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.881 [2024-12-13 12:51:50.615611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val= 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val= 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val=0x1 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val= 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val= 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val=compare 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val= 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val=software 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val=32 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val=32 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val=1 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val=Yes 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val= 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.140 12:51:50 -- accel/accel.sh@21 -- # val= 00:07:10.140 12:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.140 12:51:50 -- accel/accel.sh@20 -- # read -r var val 00:07:11.077 12:51:51 -- accel/accel.sh@21 -- # val= 00:07:11.077 12:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.077 12:51:51 -- accel/accel.sh@21 -- # val= 00:07:11.077 12:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.077 12:51:51 -- accel/accel.sh@21 -- # val= 00:07:11.077 12:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.077 12:51:51 -- accel/accel.sh@21 -- # val= 00:07:11.077 12:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.077 12:51:51 -- accel/accel.sh@21 -- # val= 00:07:11.077 12:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.077 12:51:51 -- accel/accel.sh@21 -- # val= 00:07:11.077 12:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.077 ************************************ 00:07:11.077 END TEST accel_compare 00:07:11.077 ************************************ 00:07:11.077 12:51:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.077 12:51:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.077 12:51:51 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:11.077 12:51:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.077 00:07:11.077 real 0m2.762s 00:07:11.077 user 0m2.352s 00:07:11.077 sys 0m0.206s 00:07:11.077 12:51:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.077 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:07:11.077 12:51:51 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:11.077 12:51:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:11.077 12:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.077 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:07:11.077 ************************************ 00:07:11.077 START TEST accel_xor 00:07:11.077 ************************************ 00:07:11.077 12:51:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:11.077 12:51:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.077 12:51:51 -- accel/accel.sh@17 -- # local accel_module 00:07:11.337 
12:51:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:11.337 12:51:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:11.337 12:51:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.337 12:51:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.337 12:51:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.337 12:51:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.337 12:51:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.337 12:51:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.337 12:51:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.337 12:51:51 -- accel/accel.sh@42 -- # jq -r . 00:07:11.337 [2024-12-13 12:51:51.877310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:11.337 [2024-12-13 12:51:51.878055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70544 ] 00:07:11.337 [2024-12-13 12:51:52.004534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.337 [2024-12-13 12:51:52.052904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.714 12:51:53 -- accel/accel.sh@18 -- # out=' 00:07:12.714 SPDK Configuration: 00:07:12.714 Core mask: 0x1 00:07:12.714 00:07:12.714 Accel Perf Configuration: 00:07:12.714 Workload Type: xor 00:07:12.714 Source buffers: 2 00:07:12.714 Transfer size: 4096 bytes 00:07:12.714 Vector count 1 00:07:12.714 Module: software 00:07:12.714 Queue depth: 32 00:07:12.714 Allocate depth: 32 00:07:12.714 # threads/core: 1 00:07:12.714 Run time: 1 seconds 00:07:12.714 Verify: Yes 00:07:12.714 00:07:12.714 Running for 1 seconds... 00:07:12.714 00:07:12.714 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.714 ------------------------------------------------------------------------------------ 00:07:12.714 0,0 298272/s 1165 MiB/s 0 0 00:07:12.714 ==================================================================================== 00:07:12.714 Total 298272/s 1165 MiB/s 0 0' 00:07:12.714 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.714 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.714 12:51:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:12.714 12:51:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:12.714 12:51:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.714 12:51:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.714 12:51:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.714 12:51:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.714 12:51:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.714 12:51:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.714 12:51:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.714 12:51:53 -- accel/accel.sh@42 -- # jq -r . 00:07:12.714 [2024-12-13 12:51:53.258671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
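(Annotation.) The xor pass above runs with the default of two source buffers ("Source buffers: 2" in its configuration dump). A standalone reproduction limited to the flags seen in this log would be roughly the following; dropping -c assumes the software module needs no JSON config, which matches the empty accel_json_cfg in these traces:

    # two-source xor, 1-second run, result verification enabled
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y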
00:07:12.714 [2024-12-13 12:51:53.258787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:07:12.714 [2024-12-13 12:51:53.395178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.714 [2024-12-13 12:51:53.444802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val= 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val= 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val=0x1 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val= 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val= 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val=xor 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val=2 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val= 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val=software 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.973 12:51:53 -- accel/accel.sh@21 -- # val=32 00:07:12.973 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.973 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.974 12:51:53 -- accel/accel.sh@21 -- # val=32 00:07:12.974 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.974 12:51:53 -- accel/accel.sh@21 -- # val=1 00:07:12.974 12:51:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.974 12:51:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.974 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.974 12:51:53 -- accel/accel.sh@21 -- # val=Yes 00:07:12.974 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.974 12:51:53 -- accel/accel.sh@21 -- # val= 00:07:12.974 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.974 12:51:53 -- accel/accel.sh@21 -- # val= 00:07:12.974 12:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # IFS=: 00:07:12.974 12:51:53 -- accel/accel.sh@20 -- # read -r var val 00:07:13.910 12:51:54 -- accel/accel.sh@21 -- # val= 00:07:13.910 12:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # IFS=: 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # read -r var val 00:07:13.910 12:51:54 -- accel/accel.sh@21 -- # val= 00:07:13.910 12:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # IFS=: 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # read -r var val 00:07:13.910 12:51:54 -- accel/accel.sh@21 -- # val= 00:07:13.910 12:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # IFS=: 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # read -r var val 00:07:13.910 12:51:54 -- accel/accel.sh@21 -- # val= 00:07:13.910 12:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # IFS=: 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # read -r var val 00:07:13.910 12:51:54 -- accel/accel.sh@21 -- # val= 00:07:13.910 12:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # IFS=: 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # read -r var val 00:07:13.910 12:51:54 -- accel/accel.sh@21 -- # val= 00:07:13.910 12:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # IFS=: 00:07:13.910 12:51:54 -- accel/accel.sh@20 -- # read -r var val 00:07:13.910 12:51:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.910 12:51:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:13.910 12:51:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.910 00:07:13.910 real 0m2.774s 00:07:13.910 user 0m2.367s 00:07:13.910 sys 0m0.206s 00:07:13.910 12:51:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.910 ************************************ 00:07:13.910 END TEST accel_xor 00:07:13.910 ************************************ 00:07:13.910 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.910 12:51:54 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:13.910 12:51:54 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:13.910 12:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.910 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.910 ************************************ 00:07:13.910 START TEST accel_xor 00:07:13.910 ************************************ 00:07:13.910 
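(Annotation.) The banner above opens the second xor variant: run_test passes -x 3, so accel_perf works over three source buffers instead of the default two. The equivalent direct invocation, with the same caveat about -c as above:

    # three-source xor variant exercised by the test below
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3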
12:51:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:13.910 12:51:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.910 12:51:54 -- accel/accel.sh@17 -- # local accel_module 00:07:13.910 12:51:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:13.910 12:51:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:13.910 12:51:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.910 12:51:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.910 12:51:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.910 12:51:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.910 12:51:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.910 12:51:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.910 12:51:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.910 12:51:54 -- accel/accel.sh@42 -- # jq -r . 00:07:14.180 [2024-12-13 12:51:54.697763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:14.180 [2024-12-13 12:51:54.697856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70595 ] 00:07:14.180 [2024-12-13 12:51:54.834635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.180 [2024-12-13 12:51:54.884592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.575 12:51:56 -- accel/accel.sh@18 -- # out=' 00:07:15.575 SPDK Configuration: 00:07:15.575 Core mask: 0x1 00:07:15.575 00:07:15.575 Accel Perf Configuration: 00:07:15.575 Workload Type: xor 00:07:15.575 Source buffers: 3 00:07:15.575 Transfer size: 4096 bytes 00:07:15.575 Vector count 1 00:07:15.576 Module: software 00:07:15.576 Queue depth: 32 00:07:15.576 Allocate depth: 32 00:07:15.576 # threads/core: 1 00:07:15.576 Run time: 1 seconds 00:07:15.576 Verify: Yes 00:07:15.576 00:07:15.576 Running for 1 seconds... 00:07:15.576 00:07:15.576 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.576 ------------------------------------------------------------------------------------ 00:07:15.576 0,0 283072/s 1105 MiB/s 0 0 00:07:15.576 ==================================================================================== 00:07:15.576 Total 283072/s 1105 MiB/s 0 0' 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.576 12:51:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:15.576 12:51:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.576 12:51:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.576 12:51:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.576 12:51:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.576 12:51:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.576 12:51:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.576 12:51:56 -- accel/accel.sh@42 -- # jq -r . 00:07:15.576 [2024-12-13 12:51:56.090015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:15.576 [2024-12-13 12:51:56.090108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70613 ] 00:07:15.576 [2024-12-13 12:51:56.224543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.576 [2024-12-13 12:51:56.271302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val= 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val= 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=0x1 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val= 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val= 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=xor 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=3 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val= 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=software 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=32 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=32 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=1 00:07:15.576 12:51:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val=Yes 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val= 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:15.576 12:51:56 -- accel/accel.sh@21 -- # val= 00:07:15.576 12:51:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # IFS=: 00:07:15.576 12:51:56 -- accel/accel.sh@20 -- # read -r var val 00:07:16.953 12:51:57 -- accel/accel.sh@21 -- # val= 00:07:16.953 12:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # IFS=: 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # read -r var val 00:07:16.953 12:51:57 -- accel/accel.sh@21 -- # val= 00:07:16.953 12:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # IFS=: 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # read -r var val 00:07:16.953 12:51:57 -- accel/accel.sh@21 -- # val= 00:07:16.953 12:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # IFS=: 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # read -r var val 00:07:16.953 12:51:57 -- accel/accel.sh@21 -- # val= 00:07:16.953 12:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # IFS=: 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # read -r var val 00:07:16.953 12:51:57 -- accel/accel.sh@21 -- # val= 00:07:16.953 12:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # IFS=: 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # read -r var val 00:07:16.953 12:51:57 -- accel/accel.sh@21 -- # val= 00:07:16.953 12:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # IFS=: 00:07:16.953 12:51:57 -- accel/accel.sh@20 -- # read -r var val 00:07:16.953 12:51:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.953 12:51:57 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:16.953 12:51:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.953 00:07:16.953 real 0m2.781s 00:07:16.953 user 0m2.376s 00:07:16.953 sys 0m0.206s 00:07:16.953 ************************************ 00:07:16.953 END TEST accel_xor 00:07:16.953 ************************************ 00:07:16.953 12:51:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.953 12:51:57 -- common/autotest_common.sh@10 -- # set +x 00:07:16.953 12:51:57 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:16.953 12:51:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:16.953 12:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.953 12:51:57 -- common/autotest_common.sh@10 -- # set +x 00:07:16.953 ************************************ 00:07:16.953 START TEST accel_dif_verify 00:07:16.953 ************************************ 
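(Annotation.) Each TEST block in this log is driven by run_test from autotest_common.sh: it prints the START/END banners, times the wrapped command (the real/user/sys triple after each test comes from bash's time builtin), and toggles xtrace around the bookkeeping. A stripped-down sketch of that shape -- an assumption for illustration, not the actual implementation:

    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"              # produces the real/user/sys lines seen after each test
        echo "END TEST $name"
    }
    # mirrors the call recorded in the log above
    run_test_sketch accel_dif_verify accel_test -t 1 -w dif_verify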
00:07:16.953 12:51:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:16.953 12:51:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.953 12:51:57 -- accel/accel.sh@17 -- # local accel_module 00:07:16.953 12:51:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:16.953 12:51:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.953 12:51:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:16.953 12:51:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.953 12:51:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.953 12:51:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.953 12:51:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.953 12:51:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.953 12:51:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.953 12:51:57 -- accel/accel.sh@42 -- # jq -r . 00:07:16.953 [2024-12-13 12:51:57.534775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.953 [2024-12-13 12:51:57.535585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70649 ] 00:07:16.953 [2024-12-13 12:51:57.670994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.953 [2024-12-13 12:51:57.719495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.331 12:51:58 -- accel/accel.sh@18 -- # out=' 00:07:18.331 SPDK Configuration: 00:07:18.331 Core mask: 0x1 00:07:18.331 00:07:18.331 Accel Perf Configuration: 00:07:18.331 Workload Type: dif_verify 00:07:18.331 Vector size: 4096 bytes 00:07:18.331 Transfer size: 4096 bytes 00:07:18.331 Block size: 512 bytes 00:07:18.331 Metadata size: 8 bytes 00:07:18.331 Vector count 1 00:07:18.331 Module: software 00:07:18.331 Queue depth: 32 00:07:18.331 Allocate depth: 32 00:07:18.331 # threads/core: 1 00:07:18.331 Run time: 1 seconds 00:07:18.331 Verify: No 00:07:18.331 00:07:18.331 Running for 1 seconds... 00:07:18.331 00:07:18.331 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.331 ------------------------------------------------------------------------------------ 00:07:18.331 0,0 123808/s 491 MiB/s 0 0 00:07:18.331 ==================================================================================== 00:07:18.331 Total 123808/s 483 MiB/s 0 0' 00:07:18.331 12:51:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.331 12:51:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:18.331 12:51:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.331 12:51:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:18.331 12:51:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.331 12:51:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.331 12:51:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.331 12:51:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.331 12:51:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.331 12:51:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.331 12:51:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.331 12:51:58 -- accel/accel.sh@42 -- # jq -r . 00:07:18.331 [2024-12-13 12:51:58.911347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:18.331 [2024-12-13 12:51:58.911434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70663 ] 00:07:18.331 [2024-12-13 12:51:59.033608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.331 [2024-12-13 12:51:59.080631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.590 12:51:59 -- accel/accel.sh@21 -- # val= 00:07:18.590 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.590 12:51:59 -- accel/accel.sh@21 -- # val= 00:07:18.590 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.590 12:51:59 -- accel/accel.sh@21 -- # val=0x1 00:07:18.590 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.590 12:51:59 -- accel/accel.sh@21 -- # val= 00:07:18.590 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.590 12:51:59 -- accel/accel.sh@21 -- # val= 00:07:18.590 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.590 12:51:59 -- accel/accel.sh@21 -- # val=dif_verify 00:07:18.590 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.590 12:51:59 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.590 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.590 12:51:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.590 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val= 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val=software 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 
-- # val=32 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val=32 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val=1 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val=No 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val= 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:18.591 12:51:59 -- accel/accel.sh@21 -- # val= 00:07:18.591 12:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # IFS=: 00:07:18.591 12:51:59 -- accel/accel.sh@20 -- # read -r var val 00:07:19.526 12:52:00 -- accel/accel.sh@21 -- # val= 00:07:19.526 12:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.526 12:52:00 -- accel/accel.sh@20 -- # IFS=: 00:07:19.526 12:52:00 -- accel/accel.sh@20 -- # read -r var val 00:07:19.526 12:52:00 -- accel/accel.sh@21 -- # val= 00:07:19.526 12:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # IFS=: 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # read -r var val 00:07:19.527 12:52:00 -- accel/accel.sh@21 -- # val= 00:07:19.527 12:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # IFS=: 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # read -r var val 00:07:19.527 12:52:00 -- accel/accel.sh@21 -- # val= 00:07:19.527 12:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # IFS=: 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # read -r var val 00:07:19.527 12:52:00 -- accel/accel.sh@21 -- # val= 00:07:19.527 12:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # IFS=: 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # read -r var val 00:07:19.527 12:52:00 -- accel/accel.sh@21 -- # val= 00:07:19.527 12:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # IFS=: 00:07:19.527 12:52:00 -- accel/accel.sh@20 -- # read -r var val 00:07:19.527 12:52:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.527 ************************************ 00:07:19.527 END TEST accel_dif_verify 00:07:19.527 ************************************ 00:07:19.527 12:52:00 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:19.527 12:52:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.527 00:07:19.527 real 0m2.755s 00:07:19.527 user 0m2.354s 00:07:19.527 sys 0m0.202s 00:07:19.527 12:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.527 
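(Annotation.) The DIF tests exercise metadata handling rather than plain copies: the dif_verify configuration above reported 4096-byte transfers carved into 512-byte blocks with 8 bytes of DIF metadata each, and the dif_generate pass that follows uses the same geometry. Verify is reported as No because these runs were started without -y; presumably the DIF opcodes perform their own integrity checks. The direct invocations, matching the run_test calls in this section (same -c caveat as above):

    # DIF workloads from this section; no -y, matching "Verify: No" in the config dumps
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate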
12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:19.527 12:52:00 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:19.527 12:52:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:19.786 12:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.786 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:19.786 ************************************ 00:07:19.786 START TEST accel_dif_generate 00:07:19.786 ************************************ 00:07:19.786 12:52:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:19.786 12:52:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.786 12:52:00 -- accel/accel.sh@17 -- # local accel_module 00:07:19.786 12:52:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:19.786 12:52:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.786 12:52:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:19.786 12:52:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.786 12:52:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.786 12:52:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.786 12:52:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.786 12:52:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.786 12:52:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.786 12:52:00 -- accel/accel.sh@42 -- # jq -r . 00:07:19.786 [2024-12-13 12:52:00.337038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.786 [2024-12-13 12:52:00.337303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70697 ] 00:07:19.786 [2024-12-13 12:52:00.476925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.786 [2024-12-13 12:52:00.525936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.163 12:52:01 -- accel/accel.sh@18 -- # out=' 00:07:21.163 SPDK Configuration: 00:07:21.163 Core mask: 0x1 00:07:21.163 00:07:21.163 Accel Perf Configuration: 00:07:21.163 Workload Type: dif_generate 00:07:21.163 Vector size: 4096 bytes 00:07:21.163 Transfer size: 4096 bytes 00:07:21.163 Block size: 512 bytes 00:07:21.163 Metadata size: 8 bytes 00:07:21.163 Vector count 1 00:07:21.163 Module: software 00:07:21.163 Queue depth: 32 00:07:21.163 Allocate depth: 32 00:07:21.163 # threads/core: 1 00:07:21.163 Run time: 1 seconds 00:07:21.163 Verify: No 00:07:21.163 00:07:21.163 Running for 1 seconds... 
00:07:21.163 00:07:21.163 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.163 ------------------------------------------------------------------------------------ 00:07:21.163 0,0 150240/s 596 MiB/s 0 0 00:07:21.163 ==================================================================================== 00:07:21.163 Total 150240/s 586 MiB/s 0 0' 00:07:21.163 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.163 12:52:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:21.163 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.163 12:52:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:21.163 12:52:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.163 12:52:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.163 12:52:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.163 12:52:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.163 12:52:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.163 12:52:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.163 12:52:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.163 12:52:01 -- accel/accel.sh@42 -- # jq -r . 00:07:21.163 [2024-12-13 12:52:01.725312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:21.163 [2024-12-13 12:52:01.725554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70717 ] 00:07:21.163 [2024-12-13 12:52:01.858223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.163 [2024-12-13 12:52:01.904986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val= 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val= 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val=0x1 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val= 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val= 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val=dif_generate 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 
00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val= 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val=software 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val=32 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val=32 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val=1 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val=No 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 12:52:01 -- accel/accel.sh@21 -- # val= 00:07:21.422 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.423 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.423 12:52:01 -- accel/accel.sh@21 -- # val= 00:07:21.423 12:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.423 12:52:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.423 12:52:01 -- accel/accel.sh@20 -- # read -r var val 00:07:22.358 12:52:03 -- accel/accel.sh@21 -- # val= 00:07:22.358 12:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # IFS=: 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # read -r var val 00:07:22.358 12:52:03 -- accel/accel.sh@21 -- # val= 00:07:22.358 12:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # IFS=: 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # read -r var val 00:07:22.358 12:52:03 -- accel/accel.sh@21 -- # val= 00:07:22.358 12:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.358 12:52:03 -- 
accel/accel.sh@20 -- # IFS=: 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # read -r var val 00:07:22.358 12:52:03 -- accel/accel.sh@21 -- # val= 00:07:22.358 12:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # IFS=: 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # read -r var val 00:07:22.358 12:52:03 -- accel/accel.sh@21 -- # val= 00:07:22.358 12:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # IFS=: 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # read -r var val 00:07:22.358 12:52:03 -- accel/accel.sh@21 -- # val= 00:07:22.358 12:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # IFS=: 00:07:22.358 12:52:03 -- accel/accel.sh@20 -- # read -r var val 00:07:22.358 12:52:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.358 12:52:03 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:22.358 ************************************ 00:07:22.358 END TEST accel_dif_generate 00:07:22.358 ************************************ 00:07:22.358 12:52:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.358 00:07:22.358 real 0m2.771s 00:07:22.358 user 0m2.370s 00:07:22.358 sys 0m0.205s 00:07:22.358 12:52:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.358 12:52:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.358 12:52:03 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:22.358 12:52:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:22.358 12:52:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.358 12:52:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.616 ************************************ 00:07:22.616 START TEST accel_dif_generate_copy 00:07:22.616 ************************************ 00:07:22.616 12:52:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:22.616 12:52:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.616 12:52:03 -- accel/accel.sh@17 -- # local accel_module 00:07:22.616 12:52:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:22.616 12:52:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:22.616 12:52:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.616 12:52:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.616 12:52:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.616 12:52:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.617 12:52:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.617 12:52:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.617 12:52:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.617 12:52:03 -- accel/accel.sh@42 -- # jq -r . 00:07:22.617 [2024-12-13 12:52:03.161424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:22.617 [2024-12-13 12:52:03.161518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70746 ] 00:07:22.617 [2024-12-13 12:52:03.296336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.617 [2024-12-13 12:52:03.344204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.994 12:52:04 -- accel/accel.sh@18 -- # out=' 00:07:23.994 SPDK Configuration: 00:07:23.994 Core mask: 0x1 00:07:23.994 00:07:23.994 Accel Perf Configuration: 00:07:23.994 Workload Type: dif_generate_copy 00:07:23.994 Vector size: 4096 bytes 00:07:23.994 Transfer size: 4096 bytes 00:07:23.994 Vector count 1 00:07:23.994 Module: software 00:07:23.994 Queue depth: 32 00:07:23.994 Allocate depth: 32 00:07:23.994 # threads/core: 1 00:07:23.994 Run time: 1 seconds 00:07:23.994 Verify: No 00:07:23.994 00:07:23.994 Running for 1 seconds... 00:07:23.994 00:07:23.994 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.994 ------------------------------------------------------------------------------------ 00:07:23.994 0,0 115936/s 459 MiB/s 0 0 00:07:23.994 ==================================================================================== 00:07:23.994 Total 115936/s 452 MiB/s 0 0' 00:07:23.994 12:52:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:23.994 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:23.994 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 12:52:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:23.994 12:52:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.994 12:52:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.994 12:52:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.994 12:52:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.994 12:52:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.994 12:52:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.994 12:52:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.994 12:52:04 -- accel/accel.sh@42 -- # jq -r . 00:07:23.994 [2024-12-13 12:52:04.546188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:23.994 [2024-12-13 12:52:04.546278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70771 ] 00:07:23.994 [2024-12-13 12:52:04.675036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.994 [2024-12-13 12:52:04.722406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val= 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val= 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val=0x1 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val= 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val= 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val= 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val=software 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val=32 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val=32 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 
-- # val=1 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val=No 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val= 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:24.253 12:52:04 -- accel/accel.sh@21 -- # val= 00:07:24.253 12:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # IFS=: 00:07:24.253 12:52:04 -- accel/accel.sh@20 -- # read -r var val 00:07:25.186 12:52:05 -- accel/accel.sh@21 -- # val= 00:07:25.186 12:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.186 12:52:05 -- accel/accel.sh@21 -- # val= 00:07:25.186 12:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.186 12:52:05 -- accel/accel.sh@21 -- # val= 00:07:25.186 12:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.186 12:52:05 -- accel/accel.sh@21 -- # val= 00:07:25.186 12:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.186 ************************************ 00:07:25.186 END TEST accel_dif_generate_copy 00:07:25.186 ************************************ 00:07:25.186 12:52:05 -- accel/accel.sh@21 -- # val= 00:07:25.186 12:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.186 12:52:05 -- accel/accel.sh@21 -- # val= 00:07:25.186 12:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.186 12:52:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.186 12:52:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.186 12:52:05 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:25.186 12:52:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.186 00:07:25.186 real 0m2.766s 00:07:25.186 user 0m2.362s 00:07:25.186 sys 0m0.201s 00:07:25.186 12:52:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.186 12:52:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.186 12:52:05 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:25.186 12:52:05 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.186 12:52:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:25.186 12:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.186 12:52:05 -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.186 ************************************ 00:07:25.186 START TEST accel_comp 00:07:25.186 ************************************ 00:07:25.187 12:52:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.187 12:52:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.187 12:52:05 -- accel/accel.sh@17 -- # local accel_module 00:07:25.187 12:52:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.446 12:52:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.446 12:52:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.446 12:52:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.446 12:52:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.446 12:52:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.446 12:52:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.446 12:52:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.446 12:52:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.446 12:52:05 -- accel/accel.sh@42 -- # jq -r . 00:07:25.446 [2024-12-13 12:52:05.983046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.446 [2024-12-13 12:52:05.983137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70803 ] 00:07:25.446 [2024-12-13 12:52:06.118112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.446 [2024-12-13 12:52:06.166718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.867 12:52:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.867 00:07:26.867 SPDK Configuration: 00:07:26.867 Core mask: 0x1 00:07:26.867 00:07:26.867 Accel Perf Configuration: 00:07:26.867 Workload Type: compress 00:07:26.867 Transfer size: 4096 bytes 00:07:26.867 Vector count 1 00:07:26.867 Module: software 00:07:26.867 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.867 Queue depth: 32 00:07:26.867 Allocate depth: 32 00:07:26.867 # threads/core: 1 00:07:26.867 Run time: 1 seconds 00:07:26.867 Verify: No 00:07:26.867 00:07:26.867 Running for 1 seconds... 
00:07:26.867 00:07:26.867 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.867 ------------------------------------------------------------------------------------ 00:07:26.867 0,0 59360/s 247 MiB/s 0 0 00:07:26.867 ==================================================================================== 00:07:26.867 Total 59360/s 231 MiB/s 0 0' 00:07:26.867 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:26.867 12:52:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.867 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:26.867 12:52:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.867 12:52:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.867 12:52:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.867 12:52:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.867 12:52:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.867 12:52:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.867 12:52:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.867 12:52:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.867 12:52:07 -- accel/accel.sh@42 -- # jq -r . 00:07:26.867 [2024-12-13 12:52:07.371437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:26.867 [2024-12-13 12:52:07.371704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:07:26.867 [2024-12-13 12:52:07.505444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.867 [2024-12-13 12:52:07.551966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=0x1 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=compress 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 
00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=software 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=32 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=32 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=1 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val=No 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:27.144 12:52:07 -- accel/accel.sh@21 -- # val= 00:07:27.144 12:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # IFS=: 00:07:27.144 12:52:07 -- accel/accel.sh@20 -- # read -r var val 00:07:28.087 12:52:08 -- accel/accel.sh@21 -- # val= 00:07:28.087 12:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # IFS=: 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # read -r var val 00:07:28.087 12:52:08 -- accel/accel.sh@21 -- # val= 00:07:28.087 12:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # IFS=: 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # read -r var val 00:07:28.087 12:52:08 -- accel/accel.sh@21 -- # val= 00:07:28.087 12:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # IFS=: 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # read -r var val 00:07:28.087 12:52:08 -- accel/accel.sh@21 -- # val= 
00:07:28.087 12:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # IFS=: 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # read -r var val 00:07:28.087 12:52:08 -- accel/accel.sh@21 -- # val= 00:07:28.087 12:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # IFS=: 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # read -r var val 00:07:28.087 12:52:08 -- accel/accel.sh@21 -- # val= 00:07:28.087 12:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # IFS=: 00:07:28.087 12:52:08 -- accel/accel.sh@20 -- # read -r var val 00:07:28.087 12:52:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.087 12:52:08 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:28.087 12:52:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.087 00:07:28.087 real 0m2.794s 00:07:28.087 user 0m2.384s 00:07:28.087 sys 0m0.204s 00:07:28.087 12:52:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.087 ************************************ 00:07:28.087 END TEST accel_comp 00:07:28.087 ************************************ 00:07:28.087 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:07:28.087 12:52:08 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.087 12:52:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:28.087 12:52:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.087 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:07:28.087 ************************************ 00:07:28.087 START TEST accel_decomp 00:07:28.087 ************************************ 00:07:28.087 12:52:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.087 12:52:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.087 12:52:08 -- accel/accel.sh@17 -- # local accel_module 00:07:28.087 12:52:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.087 12:52:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.087 12:52:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.087 12:52:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.087 12:52:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.087 12:52:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.087 12:52:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.087 12:52:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.087 12:52:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.087 12:52:08 -- accel/accel.sh@42 -- # jq -r . 00:07:28.087 [2024-12-13 12:52:08.831240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.087 [2024-12-13 12:52:08.831469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70857 ] 00:07:28.346 [2024-12-13 12:52:08.966784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.346 [2024-12-13 12:52:09.016512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.724 12:52:10 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:29.724 00:07:29.724 SPDK Configuration: 00:07:29.724 Core mask: 0x1 00:07:29.724 00:07:29.724 Accel Perf Configuration: 00:07:29.724 Workload Type: decompress 00:07:29.724 Transfer size: 4096 bytes 00:07:29.724 Vector count 1 00:07:29.724 Module: software 00:07:29.724 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.724 Queue depth: 32 00:07:29.724 Allocate depth: 32 00:07:29.724 # threads/core: 1 00:07:29.724 Run time: 1 seconds 00:07:29.724 Verify: Yes 00:07:29.724 00:07:29.724 Running for 1 seconds... 00:07:29.724 00:07:29.724 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.724 ------------------------------------------------------------------------------------ 00:07:29.724 0,0 83232/s 153 MiB/s 0 0 00:07:29.724 ==================================================================================== 00:07:29.724 Total 83232/s 325 MiB/s 0 0' 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.724 12:52:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.724 12:52:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.724 12:52:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.724 12:52:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.724 12:52:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.724 12:52:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.724 12:52:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.724 12:52:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.724 12:52:10 -- accel/accel.sh@42 -- # jq -r . 00:07:29.724 [2024-12-13 12:52:10.221099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:29.724 [2024-12-13 12:52:10.221193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70871 ] 00:07:29.724 [2024-12-13 12:52:10.354724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.724 [2024-12-13 12:52:10.401657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val=0x1 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val=decompress 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val=software 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val=32 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- 
accel/accel.sh@21 -- # val=32 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val=1 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val=Yes 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:29.724 12:52:10 -- accel/accel.sh@21 -- # val= 00:07:29.724 12:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # IFS=: 00:07:29.724 12:52:10 -- accel/accel.sh@20 -- # read -r var val 00:07:31.102 12:52:11 -- accel/accel.sh@21 -- # val= 00:07:31.102 12:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # IFS=: 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # read -r var val 00:07:31.102 12:52:11 -- accel/accel.sh@21 -- # val= 00:07:31.102 12:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # IFS=: 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # read -r var val 00:07:31.102 12:52:11 -- accel/accel.sh@21 -- # val= 00:07:31.102 12:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # IFS=: 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # read -r var val 00:07:31.102 12:52:11 -- accel/accel.sh@21 -- # val= 00:07:31.102 12:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # IFS=: 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # read -r var val 00:07:31.102 12:52:11 -- accel/accel.sh@21 -- # val= 00:07:31.102 12:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # IFS=: 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # read -r var val 00:07:31.102 12:52:11 -- accel/accel.sh@21 -- # val= 00:07:31.102 12:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # IFS=: 00:07:31.102 12:52:11 -- accel/accel.sh@20 -- # read -r var val 00:07:31.102 12:52:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.102 12:52:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.102 12:52:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.102 00:07:31.102 real 0m2.779s 00:07:31.102 user 0m2.368s 00:07:31.102 sys 0m0.212s 00:07:31.102 12:52:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.102 ************************************ 00:07:31.102 END TEST accel_decomp 00:07:31.102 ************************************ 00:07:31.102 12:52:11 -- common/autotest_common.sh@10 -- # set +x 00:07:31.102 12:52:11 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:31.102 12:52:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:31.102 12:52:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.102 12:52:11 -- common/autotest_common.sh@10 -- # set +x 00:07:31.102 ************************************ 00:07:31.102 START TEST accel_decmop_full 00:07:31.102 ************************************ 00:07:31.102 12:52:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.102 12:52:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.102 12:52:11 -- accel/accel.sh@17 -- # local accel_module 00:07:31.102 12:52:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.102 12:52:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.102 12:52:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.102 12:52:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.102 12:52:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.102 12:52:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.102 12:52:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.102 12:52:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.102 12:52:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.102 12:52:11 -- accel/accel.sh@42 -- # jq -r . 00:07:31.102 [2024-12-13 12:52:11.660459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:31.102 [2024-12-13 12:52:11.661073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70900 ] 00:07:31.102 [2024-12-13 12:52:11.795036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.102 [2024-12-13 12:52:11.851089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.479 12:52:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.479 00:07:32.479 SPDK Configuration: 00:07:32.479 Core mask: 0x1 00:07:32.479 00:07:32.479 Accel Perf Configuration: 00:07:32.479 Workload Type: decompress 00:07:32.479 Transfer size: 111250 bytes 00:07:32.479 Vector count 1 00:07:32.479 Module: software 00:07:32.479 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.479 Queue depth: 32 00:07:32.479 Allocate depth: 32 00:07:32.479 # threads/core: 1 00:07:32.479 Run time: 1 seconds 00:07:32.479 Verify: Yes 00:07:32.479 00:07:32.479 Running for 1 seconds... 
00:07:32.479 00:07:32.479 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.479 ------------------------------------------------------------------------------------ 00:07:32.479 0,0 5568/s 230 MiB/s 0 0 00:07:32.479 ==================================================================================== 00:07:32.479 Total 5568/s 590 MiB/s 0 0' 00:07:32.479 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.479 12:52:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:32.479 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.479 12:52:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:32.479 12:52:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.479 12:52:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.479 12:52:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.479 12:52:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.479 12:52:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.479 12:52:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.479 12:52:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.479 12:52:13 -- accel/accel.sh@42 -- # jq -r . 00:07:32.479 [2024-12-13 12:52:13.066472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.479 [2024-12-13 12:52:13.066567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70925 ] 00:07:32.479 [2024-12-13 12:52:13.201262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.479 [2024-12-13 12:52:13.249216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=0x1 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=decompress 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:32.741 12:52:13 -- accel/accel.sh@20 
-- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=software 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=32 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=32 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=1 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val=Yes 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:32.741 12:52:13 -- accel/accel.sh@21 -- # val= 00:07:32.741 12:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # IFS=: 00:07:32.741 12:52:13 -- accel/accel.sh@20 -- # read -r var val 00:07:33.678 12:52:14 -- accel/accel.sh@21 -- # val= 00:07:33.678 12:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # IFS=: 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # read -r var val 00:07:33.678 12:52:14 -- accel/accel.sh@21 -- # val= 00:07:33.678 12:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # IFS=: 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # read -r var val 00:07:33.678 12:52:14 -- accel/accel.sh@21 -- # val= 00:07:33.678 12:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # IFS=: 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # read -r var val 00:07:33.678 12:52:14 -- accel/accel.sh@21 -- # 
val= 00:07:33.678 12:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # IFS=: 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # read -r var val 00:07:33.678 12:52:14 -- accel/accel.sh@21 -- # val= 00:07:33.678 ************************************ 00:07:33.678 END TEST accel_decmop_full 00:07:33.678 ************************************ 00:07:33.678 12:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # IFS=: 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # read -r var val 00:07:33.678 12:52:14 -- accel/accel.sh@21 -- # val= 00:07:33.678 12:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # IFS=: 00:07:33.678 12:52:14 -- accel/accel.sh@20 -- # read -r var val 00:07:33.678 12:52:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.678 12:52:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.678 12:52:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.678 00:07:33.678 real 0m2.805s 00:07:33.678 user 0m2.398s 00:07:33.678 sys 0m0.207s 00:07:33.678 12:52:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.678 12:52:14 -- common/autotest_common.sh@10 -- # set +x 00:07:33.937 12:52:14 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.937 12:52:14 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:33.937 12:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.937 12:52:14 -- common/autotest_common.sh@10 -- # set +x 00:07:33.937 ************************************ 00:07:33.937 START TEST accel_decomp_mcore 00:07:33.937 ************************************ 00:07:33.937 12:52:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.937 12:52:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.937 12:52:14 -- accel/accel.sh@17 -- # local accel_module 00:07:33.937 12:52:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.937 12:52:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.937 12:52:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.937 12:52:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.937 12:52:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.937 12:52:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.937 12:52:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.937 12:52:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.937 12:52:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.937 12:52:14 -- accel/accel.sh@42 -- # jq -r . 00:07:33.937 [2024-12-13 12:52:14.520059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:33.937 [2024-12-13 12:52:14.520156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70954 ] 00:07:33.937 [2024-12-13 12:52:14.653801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.937 [2024-12-13 12:52:14.704186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.937 [2024-12-13 12:52:14.704324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.937 [2024-12-13 12:52:14.704447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.937 [2024-12-13 12:52:14.704738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.314 12:52:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.314 00:07:35.314 SPDK Configuration: 00:07:35.314 Core mask: 0xf 00:07:35.314 00:07:35.314 Accel Perf Configuration: 00:07:35.314 Workload Type: decompress 00:07:35.314 Transfer size: 4096 bytes 00:07:35.314 Vector count 1 00:07:35.314 Module: software 00:07:35.314 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.314 Queue depth: 32 00:07:35.314 Allocate depth: 32 00:07:35.314 # threads/core: 1 00:07:35.314 Run time: 1 seconds 00:07:35.314 Verify: Yes 00:07:35.314 00:07:35.314 Running for 1 seconds... 00:07:35.314 00:07:35.314 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.314 ------------------------------------------------------------------------------------ 00:07:35.314 0,0 67168/s 123 MiB/s 0 0 00:07:35.314 3,0 63456/s 116 MiB/s 0 0 00:07:35.314 2,0 65088/s 119 MiB/s 0 0 00:07:35.314 1,0 65216/s 120 MiB/s 0 0 00:07:35.314 ==================================================================================== 00:07:35.314 Total 260928/s 1019 MiB/s 0 0' 00:07:35.314 12:52:15 -- accel/accel.sh@20 -- # IFS=: 00:07:35.314 12:52:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.314 12:52:15 -- accel/accel.sh@20 -- # read -r var val 00:07:35.314 12:52:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.314 12:52:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.314 12:52:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.314 12:52:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.314 12:52:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.314 12:52:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.314 12:52:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.314 12:52:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.314 12:52:15 -- accel/accel.sh@42 -- # jq -r . 00:07:35.314 [2024-12-13 12:52:15.925072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:35.314 [2024-12-13 12:52:15.925326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70977 ] 00:07:35.314 [2024-12-13 12:52:16.061599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.573 [2024-12-13 12:52:16.120467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.573 [2024-12-13 12:52:16.120599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.573 [2024-12-13 12:52:16.120710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.573 [2024-12-13 12:52:16.120714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=0xf 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=decompress 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=software 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 
00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=32 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=32 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=1 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val=Yes 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:35.573 12:52:16 -- accel/accel.sh@21 -- # val= 00:07:35.573 12:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # IFS=: 00:07:35.573 12:52:16 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- 
accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@21 -- # val= 00:07:36.949 12:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # IFS=: 00:07:36.949 12:52:17 -- accel/accel.sh@20 -- # read -r var val 00:07:36.949 12:52:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.949 12:52:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.949 12:52:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.949 00:07:36.949 real 0m2.824s 00:07:36.949 user 0m9.153s 00:07:36.949 sys 0m0.248s 00:07:36.949 12:52:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.949 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:07:36.949 ************************************ 00:07:36.949 END TEST accel_decomp_mcore 00:07:36.949 ************************************ 00:07:36.949 12:52:17 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.949 12:52:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:36.949 12:52:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.949 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:07:36.949 ************************************ 00:07:36.949 START TEST accel_decomp_full_mcore 00:07:36.949 ************************************ 00:07:36.949 12:52:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.949 12:52:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.949 12:52:17 -- accel/accel.sh@17 -- # local accel_module 00:07:36.949 12:52:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.949 12:52:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.949 12:52:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.949 12:52:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.949 12:52:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.949 12:52:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.949 12:52:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.949 12:52:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.949 12:52:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.949 12:52:17 -- accel/accel.sh@42 -- # jq -r . 00:07:36.949 [2024-12-13 12:52:17.391244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:36.949 [2024-12-13 12:52:17.391373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71014 ] 00:07:36.949 [2024-12-13 12:52:17.523425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.949 [2024-12-13 12:52:17.574400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.949 [2024-12-13 12:52:17.574487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.949 [2024-12-13 12:52:17.574581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.949 [2024-12-13 12:52:17.574582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.325 12:52:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:38.325 00:07:38.325 SPDK Configuration: 00:07:38.325 Core mask: 0xf 00:07:38.325 00:07:38.325 Accel Perf Configuration: 00:07:38.325 Workload Type: decompress 00:07:38.325 Transfer size: 111250 bytes 00:07:38.325 Vector count 1 00:07:38.325 Module: software 00:07:38.325 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.325 Queue depth: 32 00:07:38.325 Allocate depth: 32 00:07:38.325 # threads/core: 1 00:07:38.325 Run time: 1 seconds 00:07:38.325 Verify: Yes 00:07:38.325 00:07:38.325 Running for 1 seconds... 00:07:38.325 00:07:38.325 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.325 ------------------------------------------------------------------------------------ 00:07:38.325 0,0 5056/s 208 MiB/s 0 0 00:07:38.325 3,0 4960/s 204 MiB/s 0 0 00:07:38.325 2,0 5056/s 208 MiB/s 0 0 00:07:38.325 1,0 5024/s 207 MiB/s 0 0 00:07:38.325 ==================================================================================== 00:07:38.325 Total 20096/s 2132 MiB/s 0 0' 00:07:38.325 12:52:18 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.325 12:52:18 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.325 12:52:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.325 12:52:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.325 12:52:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.325 12:52:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.325 12:52:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.325 12:52:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.325 12:52:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.325 12:52:18 -- accel/accel.sh@42 -- # jq -r . 00:07:38.325 [2024-12-13 12:52:18.800931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
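The accel_decomp_full_mcore pass repeats the same invocation with -o 0 added, and the configuration dump above reports a 111250-byte transfer instead of the 4096-byte default — whole decompressed blocks of the bib file rather than 4 KiB slices. That correspondence is read off the two configuration dumps, not taken from the tool's help text. The equivalent hand-run sketch, reusing SPDK_DIR from the earlier sketch:

# Full-block multi-core variant (same assumption as before: no -c config when run by hand).
"$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
  -l "$SPDK_DIR"/test/accel/bib -y -o 0 -m 0xf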
00:07:38.325 [2024-12-13 12:52:18.801162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71031 ] 00:07:38.325 [2024-12-13 12:52:18.935973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.325 [2024-12-13 12:52:18.994604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.325 [2024-12-13 12:52:18.994705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.325 [2024-12-13 12:52:18.994797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.325 [2024-12-13 12:52:18.995137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=0xf 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=decompress 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=software 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 
00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=32 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=32 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=1 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val=Yes 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:38.325 12:52:19 -- accel/accel.sh@21 -- # val= 00:07:38.325 12:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:38.325 12:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- 
accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@21 -- # val= 00:07:39.704 12:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 12:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 12:52:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.704 12:52:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.704 12:52:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.704 ************************************ 00:07:39.704 END TEST accel_decomp_full_mcore 00:07:39.704 ************************************ 00:07:39.704 00:07:39.704 real 0m2.843s 00:07:39.704 user 0m9.236s 00:07:39.704 sys 0m0.251s 00:07:39.704 12:52:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.704 12:52:20 -- common/autotest_common.sh@10 -- # set +x 00:07:39.704 12:52:20 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.704 12:52:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:39.704 12:52:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.704 12:52:20 -- common/autotest_common.sh@10 -- # set +x 00:07:39.704 ************************************ 00:07:39.704 START TEST accel_decomp_mthread 00:07:39.704 ************************************ 00:07:39.704 12:52:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.704 12:52:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.704 12:52:20 -- accel/accel.sh@17 -- # local accel_module 00:07:39.704 12:52:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.704 12:52:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.704 12:52:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.704 12:52:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.704 12:52:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.704 12:52:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.704 12:52:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.704 12:52:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.704 12:52:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.704 12:52:20 -- accel/accel.sh@42 -- # jq -r . 00:07:39.704 [2024-12-13 12:52:20.285405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:39.704 [2024-12-13 12:52:20.285531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71073 ] 00:07:39.704 [2024-12-13 12:52:20.422192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.704 [2024-12-13 12:52:20.471357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.080 12:52:21 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:41.080 00:07:41.080 SPDK Configuration: 00:07:41.080 Core mask: 0x1 00:07:41.080 00:07:41.080 Accel Perf Configuration: 00:07:41.080 Workload Type: decompress 00:07:41.080 Transfer size: 4096 bytes 00:07:41.080 Vector count 1 00:07:41.080 Module: software 00:07:41.080 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.080 Queue depth: 32 00:07:41.080 Allocate depth: 32 00:07:41.080 # threads/core: 2 00:07:41.080 Run time: 1 seconds 00:07:41.080 Verify: Yes 00:07:41.080 00:07:41.080 Running for 1 seconds... 00:07:41.080 00:07:41.080 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.080 ------------------------------------------------------------------------------------ 00:07:41.080 0,1 42048/s 77 MiB/s 0 0 00:07:41.080 0,0 41888/s 77 MiB/s 0 0 00:07:41.080 ==================================================================================== 00:07:41.080 Total 83936/s 327 MiB/s 0 0' 00:07:41.080 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.080 12:52:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:41.080 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.080 12:52:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:41.080 12:52:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.080 12:52:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.080 12:52:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.080 12:52:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.080 12:52:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.080 12:52:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.080 12:52:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.080 12:52:21 -- accel/accel.sh@42 -- # jq -r . 00:07:41.080 [2024-12-13 12:52:21.683028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
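accel_decomp_mthread switches from multiple cores to multiple worker threads on a single core: the recorded command drops -m and adds -T 2, and the dump above shows core mask 0x1 with "# threads/core: 2", which is why the results table lists rows 0,0 and 0,1. Hand-run sketch under the same assumptions as the earlier ones:

# Single core (default mask 0x1), two worker threads per core.
"$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
  -l "$SPDK_DIR"/test/accel/bib -y -T 2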
00:07:41.080 [2024-12-13 12:52:21.683328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71088 ] 00:07:41.080 [2024-12-13 12:52:21.810902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.340 [2024-12-13 12:52:21.863638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val=0x1 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val=decompress 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val=software 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val=32 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- 
accel/accel.sh@21 -- # val=32 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val=2 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val=Yes 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.340 12:52:21 -- accel/accel.sh@21 -- # val= 00:07:41.340 12:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.340 12:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:42.277 12:52:23 -- accel/accel.sh@21 -- # val= 00:07:42.277 12:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.277 12:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:42.277 12:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:42.277 12:52:23 -- accel/accel.sh@21 -- # val= 00:07:42.277 12:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.277 12:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:42.277 12:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:42.277 12:52:23 -- accel/accel.sh@21 -- # val= 00:07:42.277 12:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.277 12:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:42.536 12:52:23 -- accel/accel.sh@21 -- # val= 00:07:42.536 12:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:42.536 12:52:23 -- accel/accel.sh@21 -- # val= 00:07:42.536 12:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:42.536 12:52:23 -- accel/accel.sh@21 -- # val= 00:07:42.536 12:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:42.536 12:52:23 -- accel/accel.sh@21 -- # val= 00:07:42.536 12:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:42.536 12:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:42.536 12:52:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.536 12:52:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:42.536 12:52:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.536 00:07:42.536 real 0m2.795s 00:07:42.536 user 0m2.387s 00:07:42.536 sys 0m0.208s 00:07:42.536 12:52:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.536 12:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.536 ************************************ 00:07:42.536 END 
TEST accel_decomp_mthread 00:07:42.536 ************************************ 00:07:42.536 12:52:23 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.536 12:52:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:42.536 12:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.536 12:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.536 ************************************ 00:07:42.536 START TEST accel_deomp_full_mthread 00:07:42.536 ************************************ 00:07:42.536 12:52:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.536 12:52:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.536 12:52:23 -- accel/accel.sh@17 -- # local accel_module 00:07:42.536 12:52:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.536 12:52:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.536 12:52:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.536 12:52:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.536 12:52:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.536 12:52:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.536 12:52:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.536 12:52:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.536 12:52:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.536 12:52:23 -- accel/accel.sh@42 -- # jq -r . 00:07:42.536 [2024-12-13 12:52:23.134597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.536 [2024-12-13 12:52:23.134690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71123 ] 00:07:42.536 [2024-12-13 12:52:23.271146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.795 [2024-12-13 12:52:23.333463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.172 12:52:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:44.172 00:07:44.172 SPDK Configuration: 00:07:44.172 Core mask: 0x1 00:07:44.172 00:07:44.172 Accel Perf Configuration: 00:07:44.172 Workload Type: decompress 00:07:44.172 Transfer size: 111250 bytes 00:07:44.172 Vector count 1 00:07:44.172 Module: software 00:07:44.172 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.172 Queue depth: 32 00:07:44.172 Allocate depth: 32 00:07:44.172 # threads/core: 2 00:07:44.172 Run time: 1 seconds 00:07:44.172 Verify: Yes 00:07:44.172 00:07:44.172 Running for 1 seconds... 
00:07:44.172 00:07:44.172 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.172 ------------------------------------------------------------------------------------ 00:07:44.172 0,1 2816/s 116 MiB/s 0 0 00:07:44.172 0,0 2752/s 113 MiB/s 0 0 00:07:44.172 ==================================================================================== 00:07:44.172 Total 5568/s 590 MiB/s 0 0' 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.172 12:52:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.172 12:52:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.172 12:52:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.172 12:52:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.172 12:52:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.172 12:52:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.172 12:52:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.172 12:52:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.172 12:52:24 -- accel/accel.sh@42 -- # jq -r . 00:07:44.172 [2024-12-13 12:52:24.589634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:44.172 [2024-12-13 12:52:24.589922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71142 ] 00:07:44.172 [2024-12-13 12:52:24.724193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.172 [2024-12-13 12:52:24.781140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=0x1 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=decompress 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=software 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=32 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=32 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=2 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val=Yes 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.172 12:52:24 -- accel/accel.sh@21 -- # val= 00:07:44.172 12:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.172 12:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@21 -- # val= 00:07:45.549 12:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@21 -- # val= 00:07:45.549 12:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@21 -- # val= 00:07:45.549 12:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # 
read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@21 -- # val= 00:07:45.549 12:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@21 -- # val= 00:07:45.549 12:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@21 -- # val= 00:07:45.549 12:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@21 -- # val= 00:07:45.549 12:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.549 12:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.549 12:52:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.549 12:52:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:45.549 12:52:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.549 00:07:45.549 real 0m2.889s 00:07:45.549 user 0m2.477s 00:07:45.549 sys 0m0.212s 00:07:45.549 12:52:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.549 ************************************ 00:07:45.549 END TEST accel_deomp_full_mthread 00:07:45.549 ************************************ 00:07:45.549 12:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:45.549 12:52:26 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:45.549 12:52:26 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.549 12:52:26 -- accel/accel.sh@129 -- # build_accel_config 00:07:45.549 12:52:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.549 12:52:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:45.549 12:52:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.549 12:52:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.549 12:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.549 12:52:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.549 12:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.549 12:52:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.549 12:52:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.549 12:52:26 -- accel/accel.sh@42 -- # jq -r . 00:07:45.549 ************************************ 00:07:45.549 START TEST accel_dif_functional_tests 00:07:45.549 ************************************ 00:07:45.549 12:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.549 [2024-12-13 12:52:26.103773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
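accel_dif_functional_tests is not an accel_perf run but a CUnit binary (test/accel/dif/dif); the harness feeds it the generated accel JSON config through a process-substitution fd, which is why the recorded command shows -c /dev/fd/62. A rough hand-run equivalent is sketched below; the config body is an assumption (an empty accel subsystem section), since the real one comes from build_accel_config in test/accel/accel.sh and may differ:

# Sketch only; the exact JSON emitted by build_accel_config is not reproduced here.
"$SPDK_DIR"/test/accel/dif/dif \
  -c <(jq -n '{subsystems: [{subsystem: "accel", config: []}]}')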
00:07:45.549 [2024-12-13 12:52:26.103868] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71178 ] 00:07:45.549 [2024-12-13 12:52:26.231467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.549 [2024-12-13 12:52:26.282191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.549 [2024-12-13 12:52:26.282329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.549 [2024-12-13 12:52:26.282332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.807 00:07:45.807 00:07:45.808 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.808 http://cunit.sourceforge.net/ 00:07:45.808 00:07:45.808 00:07:45.808 Suite: accel_dif 00:07:45.808 Test: verify: DIF generated, GUARD check ...passed 00:07:45.808 Test: verify: DIF generated, APPTAG check ...passed 00:07:45.808 Test: verify: DIF generated, REFTAG check ...passed 00:07:45.808 Test: verify: DIF not generated, GUARD check ...[2024-12-13 12:52:26.367109] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:45.808 [2024-12-13 12:52:26.367180] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:45.808 passed 00:07:45.808 Test: verify: DIF not generated, APPTAG check ...passed 00:07:45.808 Test: verify: DIF not generated, REFTAG check ...[2024-12-13 12:52:26.367220] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:45.808 [2024-12-13 12:52:26.367247] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:45.808 [2024-12-13 12:52:26.367271] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:45.808 passed 00:07:45.808 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:45.808 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-13 12:52:26.367299] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:45.808 passed 00:07:45.808 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:45.808 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-12-13 12:52:26.367506] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:45.808 passed 00:07:45.808 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:45.808 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-13 12:52:26.367842] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:45.808 passed 00:07:45.808 Test: generate copy: DIF generated, GUARD check ...passed 00:07:45.808 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:45.808 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:45.808 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:45.808 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:45.808 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:45.808 Test: generate copy: iovecs-len validate ...passed 00:07:45.808 Test: generate copy: buffer alignment validate ...[2024-12-13 12:52:26.368441] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:45.808 passed 00:07:45.808 00:07:45.808 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.808 suites 1 1 n/a 0 0 00:07:45.808 tests 20 20 20 0 0 00:07:45.808 asserts 204 204 204 0 n/a 00:07:45.808 00:07:45.808 Elapsed time = 0.005 seconds 00:07:45.808 00:07:45.808 real 0m0.504s 00:07:45.808 user 0m0.670s 00:07:45.808 sys 0m0.144s 00:07:45.808 12:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.808 ************************************ 00:07:45.808 END TEST accel_dif_functional_tests 00:07:45.808 ************************************ 00:07:45.808 12:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:46.067 00:07:46.067 real 1m0.285s 00:07:46.067 user 1m4.893s 00:07:46.067 sys 0m5.802s 00:07:46.067 12:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.067 ************************************ 00:07:46.067 END TEST accel 00:07:46.067 ************************************ 00:07:46.067 12:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:46.067 12:52:26 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:46.067 12:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.067 12:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.067 12:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:46.067 ************************************ 00:07:46.067 START TEST accel_rpc 00:07:46.067 ************************************ 00:07:46.067 12:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:46.067 * Looking for test storage... 00:07:46.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:46.067 12:52:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:46.067 12:52:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:46.067 12:52:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:46.067 12:52:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:46.067 12:52:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:46.067 12:52:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:46.067 12:52:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:46.067 12:52:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:46.067 12:52:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:46.067 12:52:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.067 12:52:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:46.067 12:52:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:46.067 12:52:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:46.067 12:52:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:46.067 12:52:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:46.067 12:52:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:46.067 12:52:26 -- scripts/common.sh@344 -- # : 1 00:07:46.067 12:52:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:46.067 12:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.067 12:52:26 -- scripts/common.sh@364 -- # decimal 1 00:07:46.067 12:52:26 -- scripts/common.sh@352 -- # local d=1 00:07:46.067 12:52:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.067 12:52:26 -- scripts/common.sh@354 -- # echo 1 00:07:46.067 12:52:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:46.067 12:52:26 -- scripts/common.sh@365 -- # decimal 2 00:07:46.067 12:52:26 -- scripts/common.sh@352 -- # local d=2 00:07:46.067 12:52:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.067 12:52:26 -- scripts/common.sh@354 -- # echo 2 00:07:46.067 12:52:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:46.067 12:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:46.067 12:52:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:46.067 12:52:26 -- scripts/common.sh@367 -- # return 0 00:07:46.067 12:52:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.067 12:52:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:46.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.067 --rc genhtml_branch_coverage=1 00:07:46.067 --rc genhtml_function_coverage=1 00:07:46.067 --rc genhtml_legend=1 00:07:46.067 --rc geninfo_all_blocks=1 00:07:46.067 --rc geninfo_unexecuted_blocks=1 00:07:46.067 00:07:46.067 ' 00:07:46.067 12:52:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:46.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.067 --rc genhtml_branch_coverage=1 00:07:46.067 --rc genhtml_function_coverage=1 00:07:46.067 --rc genhtml_legend=1 00:07:46.067 --rc geninfo_all_blocks=1 00:07:46.067 --rc geninfo_unexecuted_blocks=1 00:07:46.067 00:07:46.067 ' 00:07:46.067 12:52:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:46.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.067 --rc genhtml_branch_coverage=1 00:07:46.067 --rc genhtml_function_coverage=1 00:07:46.067 --rc genhtml_legend=1 00:07:46.067 --rc geninfo_all_blocks=1 00:07:46.067 --rc geninfo_unexecuted_blocks=1 00:07:46.067 00:07:46.067 ' 00:07:46.067 12:52:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:46.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.067 --rc genhtml_branch_coverage=1 00:07:46.067 --rc genhtml_function_coverage=1 00:07:46.067 --rc genhtml_legend=1 00:07:46.067 --rc geninfo_all_blocks=1 00:07:46.067 --rc geninfo_unexecuted_blocks=1 00:07:46.067 00:07:46.067 ' 00:07:46.067 12:52:26 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:46.067 12:52:26 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71249 00:07:46.067 12:52:26 -- accel/accel_rpc.sh@15 -- # waitforlisten 71249 00:07:46.067 12:52:26 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:46.067 12:52:26 -- common/autotest_common.sh@829 -- # '[' -z 71249 ']' 00:07:46.067 12:52:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.067 12:52:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.067 12:52:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:46.067 12:52:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.067 12:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:46.325 [2024-12-13 12:52:26.882984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:46.325 [2024-12-13 12:52:26.883081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71249 ] 00:07:46.326 [2024-12-13 12:52:27.018430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.326 [2024-12-13 12:52:27.072390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.326 [2024-12-13 12:52:27.072542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.585 12:52:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.585 12:52:27 -- common/autotest_common.sh@862 -- # return 0 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:46.585 12:52:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.585 12:52:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.585 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.585 ************************************ 00:07:46.585 START TEST accel_assign_opcode 00:07:46.585 ************************************ 00:07:46.585 12:52:27 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:46.585 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.585 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.585 [2024-12-13 12:52:27.144940] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:46.585 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:46.585 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.585 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.585 [2024-12-13 12:52:27.152941] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:46.585 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.585 12:52:27 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:46.585 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.585 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.844 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.844 12:52:27 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:46.844 12:52:27 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:46.844 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.844 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.844 12:52:27 -- accel/accel_rpc.sh@42 -- # grep software 00:07:46.844 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.844 software 00:07:46.844 
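The accel_assign_opcode suite above exercises the accel RPCs against a bare spdk_tgt started with --wait-for-rpc, so opcode assignments land before the accel framework initializes. The same sequence can be replayed with scripts/rpc.py; the RPC names are exactly the ones recorded above, while the backgrounding and the sleep are assumptions standing in for the harness's waitforlisten helper:

RPC="$SPDK_DIR"/scripts/rpc.py
"$SPDK_DIR"/build/bin/spdk_tgt --wait-for-rpc &
sleep 2                                          # crude stand-in for waitforlisten
"$RPC" accel_assign_opc -o copy -m software      # pin the copy opcode to the software module
"$RPC" framework_start_init                      # finish init after --wait-for-rpc
"$RPC" accel_get_opc_assignments | jq -r .copy   # prints: software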
************************************ 00:07:46.844 END TEST accel_assign_opcode 00:07:46.844 ************************************ 00:07:46.844 00:07:46.844 real 0m0.275s 00:07:46.844 user 0m0.055s 00:07:46.844 sys 0m0.009s 00:07:46.844 12:52:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.844 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.844 12:52:27 -- accel/accel_rpc.sh@55 -- # killprocess 71249 00:07:46.844 12:52:27 -- common/autotest_common.sh@936 -- # '[' -z 71249 ']' 00:07:46.844 12:52:27 -- common/autotest_common.sh@940 -- # kill -0 71249 00:07:46.844 12:52:27 -- common/autotest_common.sh@941 -- # uname 00:07:46.844 12:52:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:46.844 12:52:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71249 00:07:46.844 killing process with pid 71249 00:07:46.844 12:52:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:46.844 12:52:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:46.844 12:52:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71249' 00:07:46.844 12:52:27 -- common/autotest_common.sh@955 -- # kill 71249 00:07:46.844 12:52:27 -- common/autotest_common.sh@960 -- # wait 71249 00:07:47.103 00:07:47.103 real 0m1.207s 00:07:47.103 user 0m1.108s 00:07:47.103 sys 0m0.439s 00:07:47.103 12:52:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.103 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:47.103 ************************************ 00:07:47.103 END TEST accel_rpc 00:07:47.103 ************************************ 00:07:47.362 12:52:27 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.362 12:52:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.362 12:52:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.362 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:47.362 ************************************ 00:07:47.362 START TEST app_cmdline 00:07:47.362 ************************************ 00:07:47.362 12:52:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.362 * Looking for test storage... 
00:07:47.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:47.362 12:52:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.362 12:52:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.362 12:52:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.362 12:52:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.362 12:52:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.362 12:52:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.362 12:52:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.362 12:52:28 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.362 12:52:28 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.362 12:52:28 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.362 12:52:28 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.362 12:52:28 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.362 12:52:28 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.362 12:52:28 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.362 12:52:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.362 12:52:28 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.362 12:52:28 -- scripts/common.sh@344 -- # : 1 00:07:47.362 12:52:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.362 12:52:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.362 12:52:28 -- scripts/common.sh@364 -- # decimal 1 00:07:47.362 12:52:28 -- scripts/common.sh@352 -- # local d=1 00:07:47.362 12:52:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.362 12:52:28 -- scripts/common.sh@354 -- # echo 1 00:07:47.362 12:52:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.362 12:52:28 -- scripts/common.sh@365 -- # decimal 2 00:07:47.362 12:52:28 -- scripts/common.sh@352 -- # local d=2 00:07:47.362 12:52:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.362 12:52:28 -- scripts/common.sh@354 -- # echo 2 00:07:47.362 12:52:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.362 12:52:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.362 12:52:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.362 12:52:28 -- scripts/common.sh@367 -- # return 0 00:07:47.362 12:52:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.362 12:52:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.362 --rc genhtml_branch_coverage=1 00:07:47.362 --rc genhtml_function_coverage=1 00:07:47.362 --rc genhtml_legend=1 00:07:47.362 --rc geninfo_all_blocks=1 00:07:47.362 --rc geninfo_unexecuted_blocks=1 00:07:47.362 00:07:47.362 ' 00:07:47.362 12:52:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.362 --rc genhtml_branch_coverage=1 00:07:47.362 --rc genhtml_function_coverage=1 00:07:47.362 --rc genhtml_legend=1 00:07:47.362 --rc geninfo_all_blocks=1 00:07:47.362 --rc geninfo_unexecuted_blocks=1 00:07:47.362 00:07:47.362 ' 00:07:47.362 12:52:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.362 --rc genhtml_branch_coverage=1 00:07:47.362 --rc genhtml_function_coverage=1 00:07:47.362 --rc genhtml_legend=1 00:07:47.362 --rc geninfo_all_blocks=1 00:07:47.362 --rc geninfo_unexecuted_blocks=1 00:07:47.362 00:07:47.362 ' 00:07:47.362 12:52:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.362 --rc genhtml_branch_coverage=1 00:07:47.362 --rc genhtml_function_coverage=1 00:07:47.362 --rc genhtml_legend=1 00:07:47.362 --rc geninfo_all_blocks=1 00:07:47.362 --rc geninfo_unexecuted_blocks=1 00:07:47.362 00:07:47.362 ' 00:07:47.362 12:52:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:47.362 12:52:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71354 00:07:47.362 12:52:28 -- app/cmdline.sh@18 -- # waitforlisten 71354 00:07:47.362 12:52:28 -- common/autotest_common.sh@829 -- # '[' -z 71354 ']' 00:07:47.362 12:52:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.362 12:52:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.362 12:52:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.362 12:52:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.362 12:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.362 12:52:28 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:47.621 [2024-12-13 12:52:28.143392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:47.621 [2024-12-13 12:52:28.143488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71354 ] 00:07:47.621 [2024-12-13 12:52:28.279628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.621 [2024-12-13 12:52:28.333219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.621 [2024-12-13 12:52:28.333392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.557 12:52:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.557 12:52:29 -- common/autotest_common.sh@862 -- # return 0 00:07:48.557 12:52:29 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:48.815 { 00:07:48.815 "fields": { 00:07:48.815 "commit": "c13c99a5e", 00:07:48.815 "major": 24, 00:07:48.815 "minor": 1, 00:07:48.815 "patch": 1, 00:07:48.815 "suffix": "-pre" 00:07:48.815 }, 00:07:48.815 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:48.815 } 00:07:48.815 12:52:29 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:48.815 12:52:29 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:48.815 12:52:29 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:48.815 12:52:29 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:48.815 12:52:29 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:48.815 12:52:29 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:48.815 12:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.815 12:52:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.815 12:52:29 -- app/cmdline.sh@26 -- # sort 00:07:48.815 12:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.815 12:52:29 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:48.815 12:52:29 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:48.815 12:52:29 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.815 12:52:29 -- common/autotest_common.sh@650 -- # local es=0 00:07:48.815 12:52:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.815 12:52:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.815 12:52:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.815 12:52:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.815 12:52:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.815 12:52:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.815 12:52:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.815 12:52:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.815 12:52:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:48.815 12:52:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.074 2024/12/13 12:52:29 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:49.074 request: 00:07:49.074 { 00:07:49.074 "method": "env_dpdk_get_mem_stats", 00:07:49.074 "params": {} 00:07:49.074 } 00:07:49.074 Got JSON-RPC error response 00:07:49.074 GoRPCClient: error on JSON-RPC call 00:07:49.074 12:52:29 -- common/autotest_common.sh@653 -- # es=1 00:07:49.074 12:52:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.074 12:52:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.074 12:52:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.074 12:52:29 -- app/cmdline.sh@1 -- # killprocess 71354 00:07:49.074 12:52:29 -- common/autotest_common.sh@936 -- # '[' -z 71354 ']' 00:07:49.074 12:52:29 -- common/autotest_common.sh@940 -- # kill -0 71354 00:07:49.074 12:52:29 -- common/autotest_common.sh@941 -- # uname 00:07:49.074 12:52:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.074 12:52:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71354 00:07:49.074 12:52:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.074 12:52:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.074 killing process with pid 71354 00:07:49.074 12:52:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71354' 00:07:49.074 12:52:29 -- common/autotest_common.sh@955 -- # kill 71354 00:07:49.074 12:52:29 -- common/autotest_common.sh@960 -- # wait 71354 00:07:49.333 00:07:49.333 real 0m2.142s 00:07:49.333 user 0m2.668s 00:07:49.333 sys 0m0.470s 00:07:49.333 12:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.333 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.333 ************************************ 00:07:49.333 END TEST app_cmdline 00:07:49.333 ************************************ 00:07:49.333 12:52:30 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:49.333 12:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.333 12:52:30 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.333 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.333 ************************************ 00:07:49.333 START TEST version 00:07:49.333 ************************************ 00:07:49.333 12:52:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:49.592 * Looking for test storage... 00:07:49.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:49.592 12:52:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.592 12:52:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.592 12:52:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.592 12:52:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.592 12:52:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.592 12:52:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.592 12:52:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.592 12:52:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.592 12:52:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.592 12:52:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.592 12:52:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.592 12:52:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.592 12:52:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.592 12:52:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.592 12:52:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.592 12:52:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.592 12:52:30 -- scripts/common.sh@344 -- # : 1 00:07:49.592 12:52:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.592 12:52:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.592 12:52:30 -- scripts/common.sh@364 -- # decimal 1 00:07:49.592 12:52:30 -- scripts/common.sh@352 -- # local d=1 00:07:49.592 12:52:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.592 12:52:30 -- scripts/common.sh@354 -- # echo 1 00:07:49.592 12:52:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.592 12:52:30 -- scripts/common.sh@365 -- # decimal 2 00:07:49.592 12:52:30 -- scripts/common.sh@352 -- # local d=2 00:07:49.592 12:52:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.592 12:52:30 -- scripts/common.sh@354 -- # echo 2 00:07:49.592 12:52:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.592 12:52:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.592 12:52:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.592 12:52:30 -- scripts/common.sh@367 -- # return 0 00:07:49.592 12:52:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.592 12:52:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.592 --rc genhtml_branch_coverage=1 00:07:49.592 --rc genhtml_function_coverage=1 00:07:49.592 --rc genhtml_legend=1 00:07:49.592 --rc geninfo_all_blocks=1 00:07:49.592 --rc geninfo_unexecuted_blocks=1 00:07:49.592 00:07:49.592 ' 00:07:49.592 12:52:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.592 --rc genhtml_branch_coverage=1 00:07:49.592 --rc genhtml_function_coverage=1 00:07:49.592 --rc genhtml_legend=1 00:07:49.592 --rc geninfo_all_blocks=1 00:07:49.592 --rc geninfo_unexecuted_blocks=1 00:07:49.592 00:07:49.592 ' 00:07:49.592 
12:52:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.592 --rc genhtml_branch_coverage=1 00:07:49.592 --rc genhtml_function_coverage=1 00:07:49.592 --rc genhtml_legend=1 00:07:49.592 --rc geninfo_all_blocks=1 00:07:49.592 --rc geninfo_unexecuted_blocks=1 00:07:49.592 00:07:49.592 ' 00:07:49.593 12:52:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.593 --rc genhtml_branch_coverage=1 00:07:49.593 --rc genhtml_function_coverage=1 00:07:49.593 --rc genhtml_legend=1 00:07:49.593 --rc geninfo_all_blocks=1 00:07:49.593 --rc geninfo_unexecuted_blocks=1 00:07:49.593 00:07:49.593 ' 00:07:49.593 12:52:30 -- app/version.sh@17 -- # get_header_version major 00:07:49.593 12:52:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.593 12:52:30 -- app/version.sh@14 -- # cut -f2 00:07:49.593 12:52:30 -- app/version.sh@14 -- # tr -d '"' 00:07:49.593 12:52:30 -- app/version.sh@17 -- # major=24 00:07:49.593 12:52:30 -- app/version.sh@18 -- # get_header_version minor 00:07:49.593 12:52:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.593 12:52:30 -- app/version.sh@14 -- # cut -f2 00:07:49.593 12:52:30 -- app/version.sh@14 -- # tr -d '"' 00:07:49.593 12:52:30 -- app/version.sh@18 -- # minor=1 00:07:49.593 12:52:30 -- app/version.sh@19 -- # get_header_version patch 00:07:49.593 12:52:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.593 12:52:30 -- app/version.sh@14 -- # cut -f2 00:07:49.593 12:52:30 -- app/version.sh@14 -- # tr -d '"' 00:07:49.593 12:52:30 -- app/version.sh@19 -- # patch=1 00:07:49.593 12:52:30 -- app/version.sh@20 -- # get_header_version suffix 00:07:49.593 12:52:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.593 12:52:30 -- app/version.sh@14 -- # tr -d '"' 00:07:49.593 12:52:30 -- app/version.sh@14 -- # cut -f2 00:07:49.593 12:52:30 -- app/version.sh@20 -- # suffix=-pre 00:07:49.593 12:52:30 -- app/version.sh@22 -- # version=24.1 00:07:49.593 12:52:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:49.593 12:52:30 -- app/version.sh@25 -- # version=24.1.1 00:07:49.593 12:52:30 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:49.593 12:52:30 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:49.593 12:52:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:49.593 12:52:30 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:49.593 12:52:30 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:49.593 00:07:49.593 real 0m0.214s 00:07:49.593 user 0m0.143s 00:07:49.593 sys 0m0.108s 00:07:49.593 12:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.593 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.593 ************************************ 00:07:49.593 END TEST version 00:07:49.593 ************************************ 00:07:49.593 12:52:30 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:49.593 
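The version suite above boils down to checking that the C header and the Python package agree. A condensed sketch of those checks, using the same header path, field extraction, and python probe as the trace (the rc0 handling of the "-pre" suffix is simplified here):

# condensed version check traced above
hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 24
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 1
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 1
version=$major.$minor; (( patch != 0 )) && version=$version.$patch                        # 24.1.1
py_version=$(python3 -c 'import spdk; print(spdk.__version__)')                           # 24.1.1rc0
[[ $py_version == "${version}rc0" ]]                                                      # the suite passes when these match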
12:52:30 -- spdk/autotest.sh@191 -- # uname -s 00:07:49.593 12:52:30 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:49.593 12:52:30 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:49.593 12:52:30 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:49.593 12:52:30 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:49.593 12:52:30 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:49.593 12:52:30 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:49.593 12:52:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.593 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.852 12:52:30 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:49.852 12:52:30 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:49.852 12:52:30 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:49.852 12:52:30 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:49.852 12:52:30 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:49.852 12:52:30 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:49.852 12:52:30 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:49.852 12:52:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:49.852 12:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.852 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.852 ************************************ 00:07:49.852 START TEST nvmf_tcp 00:07:49.852 ************************************ 00:07:49.852 12:52:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:49.852 * Looking for test storage... 00:07:49.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:49.852 12:52:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.852 12:52:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.852 12:52:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.852 12:52:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.852 12:52:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.852 12:52:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.852 12:52:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.852 12:52:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.852 12:52:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.852 12:52:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.852 12:52:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.852 12:52:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.852 12:52:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.852 12:52:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.852 12:52:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.852 12:52:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.852 12:52:30 -- scripts/common.sh@344 -- # : 1 00:07:49.852 12:52:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.852 12:52:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.852 12:52:30 -- scripts/common.sh@364 -- # decimal 1 00:07:49.852 12:52:30 -- scripts/common.sh@352 -- # local d=1 00:07:49.852 12:52:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.852 12:52:30 -- scripts/common.sh@354 -- # echo 1 00:07:49.852 12:52:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.852 12:52:30 -- scripts/common.sh@365 -- # decimal 2 00:07:49.852 12:52:30 -- scripts/common.sh@352 -- # local d=2 00:07:49.852 12:52:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.852 12:52:30 -- scripts/common.sh@354 -- # echo 2 00:07:49.852 12:52:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.852 12:52:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.852 12:52:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.852 12:52:30 -- scripts/common.sh@367 -- # return 0 00:07:49.852 12:52:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.852 12:52:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.852 --rc genhtml_branch_coverage=1 00:07:49.852 --rc genhtml_function_coverage=1 00:07:49.852 --rc genhtml_legend=1 00:07:49.852 --rc geninfo_all_blocks=1 00:07:49.852 --rc geninfo_unexecuted_blocks=1 00:07:49.852 00:07:49.852 ' 00:07:49.852 12:52:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.852 --rc genhtml_branch_coverage=1 00:07:49.852 --rc genhtml_function_coverage=1 00:07:49.852 --rc genhtml_legend=1 00:07:49.852 --rc geninfo_all_blocks=1 00:07:49.852 --rc geninfo_unexecuted_blocks=1 00:07:49.852 00:07:49.852 ' 00:07:49.852 12:52:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.852 --rc genhtml_branch_coverage=1 00:07:49.852 --rc genhtml_function_coverage=1 00:07:49.852 --rc genhtml_legend=1 00:07:49.852 --rc geninfo_all_blocks=1 00:07:49.852 --rc geninfo_unexecuted_blocks=1 00:07:49.852 00:07:49.852 ' 00:07:49.852 12:52:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.852 --rc genhtml_branch_coverage=1 00:07:49.852 --rc genhtml_function_coverage=1 00:07:49.852 --rc genhtml_legend=1 00:07:49.852 --rc geninfo_all_blocks=1 00:07:49.852 --rc geninfo_unexecuted_blocks=1 00:07:49.852 00:07:49.852 ' 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.852 12:52:30 -- nvmf/common.sh@7 -- # uname -s 00:07:49.852 12:52:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.852 12:52:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.852 12:52:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.852 12:52:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.852 12:52:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.852 12:52:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.852 12:52:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.852 12:52:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.852 12:52:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.852 12:52:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.852 12:52:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:07:49.852 12:52:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:07:49.852 12:52:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.852 12:52:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.852 12:52:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:49.852 12:52:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.852 12:52:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.852 12:52:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.852 12:52:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.852 12:52:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.852 12:52:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.852 12:52:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.852 12:52:30 -- paths/export.sh@5 -- # export PATH 00:07:49.852 12:52:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.852 12:52:30 -- nvmf/common.sh@46 -- # : 0 00:07:49.852 12:52:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:49.852 12:52:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:49.852 12:52:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:49.852 12:52:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.852 12:52:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.852 12:52:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:49.852 12:52:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:49.852 12:52:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:49.852 12:52:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.852 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:49.852 12:52:30 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:49.852 12:52:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:49.852 12:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.852 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.112 ************************************ 00:07:50.112 START TEST nvmf_example 00:07:50.112 ************************************ 00:07:50.112 12:52:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:50.112 * Looking for test storage... 00:07:50.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.112 12:52:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:50.112 12:52:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:50.112 12:52:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:50.112 12:52:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:50.112 12:52:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:50.112 12:52:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:50.112 12:52:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:50.112 12:52:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:50.112 12:52:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:50.112 12:52:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.112 12:52:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:50.112 12:52:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:50.112 12:52:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:50.112 12:52:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:50.112 12:52:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:50.112 12:52:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:50.112 12:52:30 -- scripts/common.sh@344 -- # : 1 00:07:50.112 12:52:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:50.112 12:52:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.112 12:52:30 -- scripts/common.sh@364 -- # decimal 1 00:07:50.112 12:52:30 -- scripts/common.sh@352 -- # local d=1 00:07:50.112 12:52:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.112 12:52:30 -- scripts/common.sh@354 -- # echo 1 00:07:50.112 12:52:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:50.112 12:52:30 -- scripts/common.sh@365 -- # decimal 2 00:07:50.112 12:52:30 -- scripts/common.sh@352 -- # local d=2 00:07:50.112 12:52:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.112 12:52:30 -- scripts/common.sh@354 -- # echo 2 00:07:50.112 12:52:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:50.112 12:52:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:50.112 12:52:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:50.112 12:52:30 -- scripts/common.sh@367 -- # return 0 00:07:50.112 12:52:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.112 12:52:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.112 --rc genhtml_branch_coverage=1 00:07:50.112 --rc genhtml_function_coverage=1 00:07:50.112 --rc genhtml_legend=1 00:07:50.112 --rc geninfo_all_blocks=1 00:07:50.112 --rc geninfo_unexecuted_blocks=1 00:07:50.112 00:07:50.112 ' 00:07:50.112 12:52:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.112 --rc genhtml_branch_coverage=1 00:07:50.112 --rc genhtml_function_coverage=1 00:07:50.112 --rc genhtml_legend=1 00:07:50.112 --rc geninfo_all_blocks=1 00:07:50.112 --rc geninfo_unexecuted_blocks=1 00:07:50.112 00:07:50.112 ' 00:07:50.112 12:52:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.112 --rc genhtml_branch_coverage=1 00:07:50.112 --rc genhtml_function_coverage=1 00:07:50.112 --rc genhtml_legend=1 00:07:50.112 --rc geninfo_all_blocks=1 00:07:50.112 --rc geninfo_unexecuted_blocks=1 00:07:50.112 00:07:50.112 ' 00:07:50.112 12:52:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.112 --rc genhtml_branch_coverage=1 00:07:50.112 --rc genhtml_function_coverage=1 00:07:50.112 --rc genhtml_legend=1 00:07:50.112 --rc geninfo_all_blocks=1 00:07:50.112 --rc geninfo_unexecuted_blocks=1 00:07:50.112 00:07:50.112 ' 00:07:50.112 12:52:30 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.112 12:52:30 -- nvmf/common.sh@7 -- # uname -s 00:07:50.112 12:52:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.112 12:52:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.112 12:52:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.112 12:52:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.112 12:52:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.112 12:52:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.112 12:52:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.112 12:52:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.112 12:52:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.112 12:52:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.112 12:52:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
00:07:50.112 12:52:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:07:50.112 12:52:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.112 12:52:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.112 12:52:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.112 12:52:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.112 12:52:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.112 12:52:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.112 12:52:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.112 12:52:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.112 12:52:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.112 12:52:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.112 12:52:30 -- paths/export.sh@5 -- # export PATH 00:07:50.112 12:52:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.112 12:52:30 -- nvmf/common.sh@46 -- # : 0 00:07:50.112 12:52:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:50.112 12:52:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:50.112 12:52:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:50.112 12:52:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.112 12:52:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.112 12:52:30 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:50.112 12:52:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:50.113 12:52:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:50.113 12:52:30 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:50.113 12:52:30 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:50.113 12:52:30 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:50.113 12:52:30 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:50.113 12:52:30 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:50.113 12:52:30 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:50.113 12:52:30 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:50.113 12:52:30 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:50.113 12:52:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.113 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.113 12:52:30 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:50.113 12:52:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:50.113 12:52:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.113 12:52:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:50.113 12:52:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:50.113 12:52:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:50.113 12:52:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.113 12:52:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.113 12:52:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.113 12:52:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:50.113 12:52:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:50.113 12:52:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:50.113 12:52:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:50.113 12:52:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:50.113 12:52:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:50.113 12:52:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.113 12:52:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.113 12:52:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:50.113 12:52:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:50.113 12:52:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:50.113 12:52:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:50.113 12:52:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:50.113 12:52:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.113 12:52:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:50.113 12:52:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:50.113 12:52:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:50.113 12:52:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:50.113 12:52:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:50.113 Cannot find device "nvmf_init_br" 00:07:50.113 12:52:30 -- nvmf/common.sh@153 -- # true 00:07:50.113 12:52:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:50.113 Cannot find device "nvmf_tgt_br" 00:07:50.113 12:52:30 -- nvmf/common.sh@154 -- # true 00:07:50.113 12:52:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:50.113 Cannot find device "nvmf_tgt_br2" 
00:07:50.113 12:52:30 -- nvmf/common.sh@155 -- # true 00:07:50.113 12:52:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:50.113 Cannot find device "nvmf_init_br" 00:07:50.113 12:52:30 -- nvmf/common.sh@156 -- # true 00:07:50.113 12:52:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:50.113 Cannot find device "nvmf_tgt_br" 00:07:50.113 12:52:30 -- nvmf/common.sh@157 -- # true 00:07:50.113 12:52:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:50.371 Cannot find device "nvmf_tgt_br2" 00:07:50.371 12:52:30 -- nvmf/common.sh@158 -- # true 00:07:50.371 12:52:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:50.371 Cannot find device "nvmf_br" 00:07:50.371 12:52:30 -- nvmf/common.sh@159 -- # true 00:07:50.371 12:52:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:50.371 Cannot find device "nvmf_init_if" 00:07:50.371 12:52:30 -- nvmf/common.sh@160 -- # true 00:07:50.371 12:52:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.371 12:52:30 -- nvmf/common.sh@161 -- # true 00:07:50.371 12:52:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.371 12:52:30 -- nvmf/common.sh@162 -- # true 00:07:50.371 12:52:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:50.371 12:52:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:50.371 12:52:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:50.371 12:52:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:50.371 12:52:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.371 12:52:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.371 12:52:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.371 12:52:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:50.371 12:52:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:50.371 12:52:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:50.371 12:52:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:50.371 12:52:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:50.371 12:52:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:50.371 12:52:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.371 12:52:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.371 12:52:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.371 12:52:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:50.371 12:52:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:50.371 12:52:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.371 12:52:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:50.371 12:52:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.371 12:52:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.630 12:52:31 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.630 12:52:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:50.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:07:50.630 00:07:50.630 --- 10.0.0.2 ping statistics --- 00:07:50.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.630 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:50.630 12:52:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:50.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:50.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:07:50.630 00:07:50.630 --- 10.0.0.3 ping statistics --- 00:07:50.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.630 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:50.630 12:52:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:50.630 00:07:50.630 --- 10.0.0.1 ping statistics --- 00:07:50.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.630 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:50.630 12:52:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.630 12:52:31 -- nvmf/common.sh@421 -- # return 0 00:07:50.630 12:52:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:50.630 12:52:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.630 12:52:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:50.630 12:52:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:50.630 12:52:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.630 12:52:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:50.630 12:52:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:50.630 12:52:31 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:50.630 12:52:31 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:50.630 12:52:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.630 12:52:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.630 12:52:31 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:50.630 12:52:31 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:50.630 12:52:31 -- target/nvmf_example.sh@34 -- # nvmfpid=71723 00:07:50.630 12:52:31 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.630 12:52:31 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:50.630 12:52:31 -- target/nvmf_example.sh@36 -- # waitforlisten 71723 00:07:50.630 12:52:31 -- common/autotest_common.sh@829 -- # '[' -z 71723 ']' 00:07:50.630 12:52:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.630 12:52:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.630 12:52:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
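The nvmf_veth_init sequence traced above builds the throwaway test network the example target listens on. Reduced to its effective commands, with the same interface names and addresses as in the log (the intermediate "ip link set ... up" steps are omitted for brevity), the topology is:

# test network built by nvmf_veth_init: initiator stays in the root namespace,
# target interfaces are moved into nvmf_tgt_ns_spdk, and everything is bridged together
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target address, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target address, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                               # the connectivity probes before the target starts

The earlier "Cannot find device" messages in the trace are the pre-setup teardown of these same names and are expected to fail on a fresh run; only after the pings succeed does the script launch the example target inside the namespace with ip netns exec nvmf_tgt_ns_spdk.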
00:07:50.630 12:52:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.630 12:52:31 -- common/autotest_common.sh@10 -- # set +x 00:07:51.595 12:52:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.595 12:52:32 -- common/autotest_common.sh@862 -- # return 0 00:07:51.595 12:52:32 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:51.595 12:52:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.595 12:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.595 12:52:32 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.595 12:52:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.595 12:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.595 12:52:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.595 12:52:32 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:51.595 12:52:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.595 12:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.595 12:52:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.595 12:52:32 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:51.595 12:52:32 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.595 12:52:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.595 12:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.595 12:52:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.595 12:52:32 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:51.595 12:52:32 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.595 12:52:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.595 12:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.595 12:52:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.595 12:52:32 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.595 12:52:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.595 12:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.595 12:52:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.595 12:52:32 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:51.595 12:52:32 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:03.813 Initializing NVMe Controllers 00:08:03.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:03.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:03.813 Initialization complete. Launching workers. 
00:08:03.813 ======================================================== 00:08:03.813 Latency(us) 00:08:03.813 Device Information : IOPS MiB/s Average min max 00:08:03.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16565.52 64.71 3862.97 561.30 21757.35 00:08:03.813 ======================================================== 00:08:03.813 Total : 16565.52 64.71 3862.97 561.30 21757.35 00:08:03.813 00:08:03.813 12:52:42 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:03.813 12:52:42 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:03.813 12:52:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:03.813 12:52:42 -- nvmf/common.sh@116 -- # sync 00:08:03.813 12:52:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:03.813 12:52:42 -- nvmf/common.sh@119 -- # set +e 00:08:03.813 12:52:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:03.813 12:52:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:03.813 rmmod nvme_tcp 00:08:03.813 rmmod nvme_fabrics 00:08:03.813 rmmod nvme_keyring 00:08:03.813 12:52:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:03.813 12:52:42 -- nvmf/common.sh@123 -- # set -e 00:08:03.813 12:52:42 -- nvmf/common.sh@124 -- # return 0 00:08:03.813 12:52:42 -- nvmf/common.sh@477 -- # '[' -n 71723 ']' 00:08:03.813 12:52:42 -- nvmf/common.sh@478 -- # killprocess 71723 00:08:03.813 12:52:42 -- common/autotest_common.sh@936 -- # '[' -z 71723 ']' 00:08:03.813 12:52:42 -- common/autotest_common.sh@940 -- # kill -0 71723 00:08:03.813 12:52:42 -- common/autotest_common.sh@941 -- # uname 00:08:03.813 12:52:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:03.813 12:52:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71723 00:08:03.813 12:52:42 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:03.813 12:52:42 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:03.813 killing process with pid 71723 00:08:03.813 12:52:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71723' 00:08:03.813 12:52:42 -- common/autotest_common.sh@955 -- # kill 71723 00:08:03.813 12:52:42 -- common/autotest_common.sh@960 -- # wait 71723 00:08:03.813 nvmf threads initialize successfully 00:08:03.813 bdev subsystem init successfully 00:08:03.813 created a nvmf target service 00:08:03.813 create targets's poll groups done 00:08:03.813 all subsystems of target started 00:08:03.813 nvmf target is running 00:08:03.813 all subsystems of target stopped 00:08:03.813 destroy targets's poll groups done 00:08:03.813 destroyed the nvmf target service 00:08:03.813 bdev subsystem finish successfully 00:08:03.813 nvmf threads destroy successfully 00:08:03.813 12:52:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:03.813 12:52:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:03.813 12:52:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:03.813 12:52:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.813 12:52:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:03.813 12:52:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.813 12:52:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.813 12:52:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.813 12:52:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:03.813 12:52:42 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:03.813 12:52:42 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:03.813 12:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:03.813 00:08:03.813 real 0m12.296s 00:08:03.813 user 0m44.178s 00:08:03.813 sys 0m1.969s 00:08:03.813 12:52:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.813 12:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:03.813 ************************************ 00:08:03.813 END TEST nvmf_example 00:08:03.813 ************************************ 00:08:03.813 12:52:42 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.813 12:52:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:03.813 12:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.813 12:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:03.813 ************************************ 00:08:03.813 START TEST nvmf_filesystem 00:08:03.813 ************************************ 00:08:03.813 12:52:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.813 * Looking for test storage... 00:08:03.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.813 12:52:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.813 12:52:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.813 12:52:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.813 12:52:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.813 12:52:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.813 12:52:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.813 12:52:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.813 12:52:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.813 12:52:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.813 12:52:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.813 12:52:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.813 12:52:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.813 12:52:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.813 12:52:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.813 12:52:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.813 12:52:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.813 12:52:43 -- scripts/common.sh@344 -- # : 1 00:08:03.813 12:52:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.813 12:52:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.813 12:52:43 -- scripts/common.sh@364 -- # decimal 1 00:08:03.813 12:52:43 -- scripts/common.sh@352 -- # local d=1 00:08:03.813 12:52:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.813 12:52:43 -- scripts/common.sh@354 -- # echo 1 00:08:03.813 12:52:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.813 12:52:43 -- scripts/common.sh@365 -- # decimal 2 00:08:03.813 12:52:43 -- scripts/common.sh@352 -- # local d=2 00:08:03.813 12:52:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.813 12:52:43 -- scripts/common.sh@354 -- # echo 2 00:08:03.813 12:52:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.813 12:52:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.813 12:52:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.813 12:52:43 -- scripts/common.sh@367 -- # return 0 00:08:03.813 12:52:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.813 12:52:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.813 --rc genhtml_branch_coverage=1 00:08:03.813 --rc genhtml_function_coverage=1 00:08:03.814 --rc genhtml_legend=1 00:08:03.814 --rc geninfo_all_blocks=1 00:08:03.814 --rc geninfo_unexecuted_blocks=1 00:08:03.814 00:08:03.814 ' 00:08:03.814 12:52:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.814 --rc genhtml_branch_coverage=1 00:08:03.814 --rc genhtml_function_coverage=1 00:08:03.814 --rc genhtml_legend=1 00:08:03.814 --rc geninfo_all_blocks=1 00:08:03.814 --rc geninfo_unexecuted_blocks=1 00:08:03.814 00:08:03.814 ' 00:08:03.814 12:52:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.814 --rc genhtml_branch_coverage=1 00:08:03.814 --rc genhtml_function_coverage=1 00:08:03.814 --rc genhtml_legend=1 00:08:03.814 --rc geninfo_all_blocks=1 00:08:03.814 --rc geninfo_unexecuted_blocks=1 00:08:03.814 00:08:03.814 ' 00:08:03.814 12:52:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.814 --rc genhtml_branch_coverage=1 00:08:03.814 --rc genhtml_function_coverage=1 00:08:03.814 --rc genhtml_legend=1 00:08:03.814 --rc geninfo_all_blocks=1 00:08:03.814 --rc geninfo_unexecuted_blocks=1 00:08:03.814 00:08:03.814 ' 00:08:03.814 12:52:43 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:03.814 12:52:43 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:03.814 12:52:43 -- common/autotest_common.sh@34 -- # set -e 00:08:03.814 12:52:43 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:03.814 12:52:43 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:03.814 12:52:43 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:03.814 12:52:43 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:03.814 12:52:43 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:03.814 12:52:43 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:03.814 12:52:43 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:03.814 12:52:43 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:03.814 12:52:43 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:08:03.814 12:52:43 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:03.814 12:52:43 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:03.814 12:52:43 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:03.814 12:52:43 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:03.814 12:52:43 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:03.814 12:52:43 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:03.814 12:52:43 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:03.814 12:52:43 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:03.814 12:52:43 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:03.814 12:52:43 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:03.814 12:52:43 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:03.814 12:52:43 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:03.814 12:52:43 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:03.814 12:52:43 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:03.814 12:52:43 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:03.814 12:52:43 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:03.814 12:52:43 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:03.814 12:52:43 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:03.814 12:52:43 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:03.814 12:52:43 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:03.814 12:52:43 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:03.814 12:52:43 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:03.814 12:52:43 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:03.814 12:52:43 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:03.814 12:52:43 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:03.814 12:52:43 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:03.814 12:52:43 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:03.814 12:52:43 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:03.814 12:52:43 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:03.814 12:52:43 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:03.814 12:52:43 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:03.814 12:52:43 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:03.814 12:52:43 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:03.814 12:52:43 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:03.814 12:52:43 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:03.814 12:52:43 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:03.814 12:52:43 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:03.814 12:52:43 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:03.814 12:52:43 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:03.814 12:52:43 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:03.814 12:52:43 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:03.814 12:52:43 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:03.814 12:52:43 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:03.814 12:52:43 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:03.814 12:52:43 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:03.814 12:52:43 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:03.814 12:52:43 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:03.814 12:52:43 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:03.814 12:52:43 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:03.814 12:52:43 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:03.814 12:52:43 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:03.814 12:52:43 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:03.814 12:52:43 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:03.814 12:52:43 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:03.814 12:52:43 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:03.814 12:52:43 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:03.814 12:52:43 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:03.814 12:52:43 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:03.814 12:52:43 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:03.814 12:52:43 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:03.814 12:52:43 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:03.814 12:52:43 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:03.814 12:52:43 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:03.814 12:52:43 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:03.814 12:52:43 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:03.814 12:52:43 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:03.814 12:52:43 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:03.814 12:52:43 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:03.814 12:52:43 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:03.814 12:52:43 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:03.814 12:52:43 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:03.814 12:52:43 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:03.814 12:52:43 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:03.814 12:52:43 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:03.814 12:52:43 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:03.814 12:52:43 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:03.814 12:52:43 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:03.814 12:52:43 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:03.814 12:52:43 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:03.814 12:52:43 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:03.814 12:52:43 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:03.814 12:52:43 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:03.814 12:52:43 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:03.814 12:52:43 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:03.814 12:52:43 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:03.814 12:52:43 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:03.814 12:52:43 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:03.814 12:52:43 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:03.814 12:52:43 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:03.814 12:52:43 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:03.814 #define SPDK_CONFIG_H 00:08:03.814 #define SPDK_CONFIG_APPS 1 00:08:03.814 #define SPDK_CONFIG_ARCH native 00:08:03.814 #undef SPDK_CONFIG_ASAN 00:08:03.814 #define SPDK_CONFIG_AVAHI 1 00:08:03.814 #undef SPDK_CONFIG_CET 00:08:03.814 #define SPDK_CONFIG_COVERAGE 1 00:08:03.814 #define SPDK_CONFIG_CROSS_PREFIX 00:08:03.814 #undef SPDK_CONFIG_CRYPTO 00:08:03.814 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:03.814 #undef SPDK_CONFIG_CUSTOMOCF 00:08:03.814 #undef SPDK_CONFIG_DAOS 00:08:03.814 #define SPDK_CONFIG_DAOS_DIR 00:08:03.814 #define SPDK_CONFIG_DEBUG 1 00:08:03.814 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:03.814 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:03.814 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:03.814 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:03.814 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:03.814 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:03.814 #define SPDK_CONFIG_EXAMPLES 1 00:08:03.814 #undef SPDK_CONFIG_FC 00:08:03.814 #define SPDK_CONFIG_FC_PATH 00:08:03.814 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:03.814 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:03.814 #undef SPDK_CONFIG_FUSE 00:08:03.814 #undef SPDK_CONFIG_FUZZER 00:08:03.814 #define SPDK_CONFIG_FUZZER_LIB 00:08:03.814 #define SPDK_CONFIG_GOLANG 1 00:08:03.814 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:03.814 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:03.814 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:03.814 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:03.814 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:03.814 #define SPDK_CONFIG_IDXD 1 00:08:03.814 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:03.814 #undef SPDK_CONFIG_IPSEC_MB 00:08:03.814 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:03.814 #define SPDK_CONFIG_ISAL 1 00:08:03.814 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:03.814 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:03.814 #define SPDK_CONFIG_LIBDIR 00:08:03.815 #undef SPDK_CONFIG_LTO 00:08:03.815 #define SPDK_CONFIG_MAX_LCORES 00:08:03.815 #define SPDK_CONFIG_NVME_CUSE 1 00:08:03.815 #undef SPDK_CONFIG_OCF 00:08:03.815 #define SPDK_CONFIG_OCF_PATH 00:08:03.815 #define SPDK_CONFIG_OPENSSL_PATH 00:08:03.815 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:03.815 #undef SPDK_CONFIG_PGO_USE 00:08:03.815 #define SPDK_CONFIG_PREFIX /usr/local 00:08:03.815 #undef SPDK_CONFIG_RAID5F 00:08:03.815 #undef SPDK_CONFIG_RBD 00:08:03.815 #define SPDK_CONFIG_RDMA 1 00:08:03.815 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:03.815 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:03.815 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:03.815 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:03.815 #define SPDK_CONFIG_SHARED 1 00:08:03.815 #undef SPDK_CONFIG_SMA 00:08:03.815 #define SPDK_CONFIG_TESTS 1 00:08:03.815 #undef SPDK_CONFIG_TSAN 00:08:03.815 #define SPDK_CONFIG_UBLK 1 00:08:03.815 #define SPDK_CONFIG_UBSAN 1 00:08:03.815 #undef SPDK_CONFIG_UNIT_TESTS 00:08:03.815 #undef SPDK_CONFIG_URING 00:08:03.815 #define SPDK_CONFIG_URING_PATH 00:08:03.815 #undef SPDK_CONFIG_URING_ZNS 00:08:03.815 #define SPDK_CONFIG_USDT 1 00:08:03.815 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:03.815 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:03.815 #undef SPDK_CONFIG_VFIO_USER 00:08:03.815 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:03.815 #define SPDK_CONFIG_VHOST 1 00:08:03.815 #define SPDK_CONFIG_VIRTIO 1 00:08:03.815 #undef SPDK_CONFIG_VTUNE 00:08:03.815 #define SPDK_CONFIG_VTUNE_DIR 00:08:03.815 #define SPDK_CONFIG_WERROR 1 00:08:03.815 #define SPDK_CONFIG_WPDK_DIR 00:08:03.815 #undef SPDK_CONFIG_XNVME 00:08:03.815 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:03.815 12:52:43 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:03.815 12:52:43 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.815 12:52:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.815 12:52:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.815 12:52:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.815 12:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.815 12:52:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.815 12:52:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.815 12:52:43 -- paths/export.sh@5 -- # export PATH 00:08:03.815 12:52:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.815 12:52:43 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:03.815 12:52:43 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:03.815 12:52:43 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:03.815 12:52:43 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:03.815 12:52:43 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:03.815 12:52:43 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:03.815 12:52:43 -- pm/common@16 -- # TEST_TAG=N/A 00:08:03.815 12:52:43 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:03.815 12:52:43 -- common/autotest_common.sh@52 -- # : 1 00:08:03.815 12:52:43 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:03.815 12:52:43 -- common/autotest_common.sh@56 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:03.815 12:52:43 -- common/autotest_common.sh@58 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:03.815 12:52:43 -- common/autotest_common.sh@60 -- # : 1 00:08:03.815 12:52:43 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:03.815 12:52:43 -- common/autotest_common.sh@62 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:03.815 12:52:43 -- common/autotest_common.sh@64 -- # : 00:08:03.815 12:52:43 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:03.815 12:52:43 -- common/autotest_common.sh@66 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:03.815 12:52:43 -- common/autotest_common.sh@68 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:03.815 12:52:43 -- common/autotest_common.sh@70 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:03.815 12:52:43 -- common/autotest_common.sh@72 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:03.815 12:52:43 -- common/autotest_common.sh@74 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:03.815 12:52:43 -- common/autotest_common.sh@76 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:03.815 12:52:43 -- common/autotest_common.sh@78 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:03.815 12:52:43 -- common/autotest_common.sh@80 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:03.815 12:52:43 -- common/autotest_common.sh@82 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:03.815 12:52:43 -- common/autotest_common.sh@84 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:03.815 12:52:43 -- common/autotest_common.sh@86 -- # : 1 00:08:03.815 12:52:43 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:03.815 12:52:43 -- common/autotest_common.sh@88 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:03.815 12:52:43 -- common/autotest_common.sh@90 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:03.815 12:52:43 -- common/autotest_common.sh@92 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:03.815 12:52:43 -- common/autotest_common.sh@94 -- # : 0 00:08:03.815 12:52:43 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:03.815 12:52:43 -- common/autotest_common.sh@96 -- # : tcp 00:08:03.815 12:52:43 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:03.815 12:52:43 -- common/autotest_common.sh@98 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:03.815 12:52:43 -- common/autotest_common.sh@100 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:03.815 12:52:43 -- common/autotest_common.sh@102 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:03.815 12:52:43 -- common/autotest_common.sh@104 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:03.815 12:52:43 -- common/autotest_common.sh@106 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:03.815 12:52:43 -- common/autotest_common.sh@108 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:03.815 12:52:43 -- common/autotest_common.sh@110 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:03.815 12:52:43 -- common/autotest_common.sh@112 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:03.815 12:52:43 -- common/autotest_common.sh@114 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:03.815 12:52:43 -- common/autotest_common.sh@116 -- # : 1 00:08:03.815 12:52:43 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:03.815 12:52:43 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:03.815 12:52:43 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:03.815 12:52:43 -- common/autotest_common.sh@120 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:03.815 12:52:43 -- common/autotest_common.sh@122 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:03.815 12:52:43 -- common/autotest_common.sh@124 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:03.815 12:52:43 -- common/autotest_common.sh@126 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:03.815 12:52:43 -- common/autotest_common.sh@128 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:03.815 12:52:43 -- common/autotest_common.sh@130 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:03.815 12:52:43 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:03.815 12:52:43 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:03.815 12:52:43 -- common/autotest_common.sh@134 -- # : true 00:08:03.815 12:52:43 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:03.815 12:52:43 -- common/autotest_common.sh@136 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:03.815 12:52:43 -- common/autotest_common.sh@138 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:03.815 12:52:43 -- common/autotest_common.sh@140 -- # : 1 00:08:03.815 12:52:43 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:03.815 12:52:43 -- 
common/autotest_common.sh@142 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:03.815 12:52:43 -- common/autotest_common.sh@144 -- # : 0 00:08:03.815 12:52:43 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:03.815 12:52:43 -- common/autotest_common.sh@146 -- # : 0 00:08:03.816 12:52:43 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:03.816 12:52:43 -- common/autotest_common.sh@148 -- # : 00:08:03.816 12:52:43 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:03.816 12:52:43 -- common/autotest_common.sh@150 -- # : 0 00:08:03.816 12:52:43 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:03.816 12:52:43 -- common/autotest_common.sh@152 -- # : 0 00:08:03.816 12:52:43 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:03.816 12:52:43 -- common/autotest_common.sh@154 -- # : 0 00:08:03.816 12:52:43 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:03.816 12:52:43 -- common/autotest_common.sh@156 -- # : 0 00:08:03.816 12:52:43 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:03.816 12:52:43 -- common/autotest_common.sh@158 -- # : 0 00:08:03.816 12:52:43 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:03.816 12:52:43 -- common/autotest_common.sh@160 -- # : 0 00:08:03.816 12:52:43 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:03.816 12:52:43 -- common/autotest_common.sh@163 -- # : 00:08:03.816 12:52:43 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:03.816 12:52:43 -- common/autotest_common.sh@165 -- # : 1 00:08:03.816 12:52:43 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:03.816 12:52:43 -- common/autotest_common.sh@167 -- # : 1 00:08:03.816 12:52:43 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:03.816 12:52:43 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:03.816 12:52:43 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.816 12:52:43 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.816 12:52:43 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:03.816 12:52:43 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:03.816 12:52:43 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:03.816 12:52:43 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:03.816 12:52:43 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.816 12:52:43 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.816 12:52:43 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.816 12:52:43 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.816 12:52:43 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:03.816 12:52:43 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:03.816 12:52:43 -- common/autotest_common.sh@196 -- # cat 00:08:03.816 12:52:43 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:03.816 12:52:43 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.816 12:52:43 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.816 12:52:43 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.816 12:52:43 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.816 12:52:43 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:03.816 12:52:43 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:03.816 12:52:43 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:03.816 12:52:43 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:03.816 12:52:43 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:03.816 12:52:43 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:03.816 12:52:43 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.816 12:52:43 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.816 12:52:43 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.816 12:52:43 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.816 12:52:43 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:03.816 12:52:43 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:03.816 12:52:43 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.816 12:52:43 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.816 12:52:43 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:03.816 12:52:43 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:03.816 12:52:43 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:03.816 12:52:43 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:03.816 12:52:43 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:03.816 12:52:43 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:03.816 12:52:43 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:03.816 12:52:43 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:03.816 12:52:43 -- common/autotest_common.sh@259 -- # valgrind= 00:08:03.816 12:52:43 -- common/autotest_common.sh@265 -- # uname -s 00:08:03.816 12:52:43 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:03.816 12:52:43 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:03.816 12:52:43 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:03.816 12:52:43 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:03.816 12:52:43 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:03.816 12:52:43 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:03.816 12:52:43 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:03.816 12:52:43 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:03.816 12:52:43 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:03.816 12:52:43 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:03.816 12:52:43 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:03.816 12:52:43 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:03.816 12:52:43 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:03.816 12:52:43 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:03.816 12:52:43 -- common/autotest_common.sh@319 -- # [[ 
-z 71969 ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@319 -- # kill -0 71969 00:08:03.816 12:52:43 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:03.816 12:52:43 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:03.816 12:52:43 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:03.816 12:52:43 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:03.816 12:52:43 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:03.816 12:52:43 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:03.816 12:52:43 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:03.816 12:52:43 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.KGx6tH 00:08:03.816 12:52:43 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:03.816 12:52:43 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:03.816 12:52:43 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.KGx6tH/tests/target /tmp/spdk.KGx6tH 00:08:03.816 12:52:43 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:03.816 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.816 12:52:43 -- common/autotest_common.sh@328 -- # df -T 00:08:03.816 12:52:43 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:03.816 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:03.816 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:03.816 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431681024 00:08:03.816 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:03.816 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=6150262784 00:08:03.816 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.816 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:03.816 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:03.816 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:03.816 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:03.816 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:03.816 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:03.817 12:52:43 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431681024 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=6150262784 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266290176 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:03.817 12:52:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=97248874496 00:08:03.817 12:52:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:03.817 12:52:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=2453905408 00:08:03.817 12:52:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:03.817 12:52:43 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:03.817 * Looking for test storage... 00:08:03.817 12:52:43 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:03.817 12:52:43 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:03.817 12:52:43 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.817 12:52:43 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:03.817 12:52:43 -- common/autotest_common.sh@373 -- # mount=/home 00:08:03.817 12:52:43 -- common/autotest_common.sh@375 -- # target_space=13431681024 00:08:03.817 12:52:43 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:03.817 12:52:43 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:03.817 12:52:43 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:03.817 12:52:43 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:03.817 12:52:43 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:03.817 12:52:43 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.817 12:52:43 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.817 12:52:43 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.817 12:52:43 -- common/autotest_common.sh@390 -- # return 0 00:08:03.817 12:52:43 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:03.817 12:52:43 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:03.817 12:52:43 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:03.817 12:52:43 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:03.817 12:52:43 -- common/autotest_common.sh@1682 -- # true 00:08:03.817 12:52:43 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:03.817 12:52:43 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:03.817 12:52:43 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:03.817 12:52:43 -- common/autotest_common.sh@27 -- # exec 00:08:03.817 12:52:43 -- common/autotest_common.sh@29 -- # exec 00:08:03.817 12:52:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:03.817 12:52:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:03.817 12:52:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:03.817 12:52:43 -- common/autotest_common.sh@18 -- # set -x 00:08:03.817 12:52:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.817 12:52:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.817 12:52:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.817 12:52:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.817 12:52:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.817 12:52:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.817 12:52:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.817 12:52:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.817 12:52:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.817 12:52:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.817 12:52:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.817 12:52:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.817 12:52:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.817 12:52:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.817 12:52:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.817 12:52:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.817 12:52:43 -- scripts/common.sh@344 -- # : 1 00:08:03.817 12:52:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.817 12:52:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.817 12:52:43 -- scripts/common.sh@364 -- # decimal 1 00:08:03.817 12:52:43 -- scripts/common.sh@352 -- # local d=1 00:08:03.817 12:52:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.817 12:52:43 -- scripts/common.sh@354 -- # echo 1 00:08:03.817 12:52:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.817 12:52:43 -- scripts/common.sh@365 -- # decimal 2 00:08:03.817 12:52:43 -- scripts/common.sh@352 -- # local d=2 00:08:03.817 12:52:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.817 12:52:43 -- scripts/common.sh@354 -- # echo 2 00:08:03.817 12:52:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.817 12:52:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.817 12:52:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.817 12:52:43 -- scripts/common.sh@367 -- # return 0 00:08:03.817 12:52:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.817 12:52:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.817 --rc genhtml_branch_coverage=1 00:08:03.817 --rc genhtml_function_coverage=1 00:08:03.817 --rc genhtml_legend=1 00:08:03.817 --rc geninfo_all_blocks=1 00:08:03.817 --rc geninfo_unexecuted_blocks=1 00:08:03.817 00:08:03.817 ' 00:08:03.817 12:52:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.817 --rc genhtml_branch_coverage=1 00:08:03.817 --rc genhtml_function_coverage=1 00:08:03.817 --rc genhtml_legend=1 00:08:03.817 --rc geninfo_all_blocks=1 00:08:03.817 --rc geninfo_unexecuted_blocks=1 00:08:03.817 00:08:03.817 ' 00:08:03.817 12:52:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.818 --rc genhtml_branch_coverage=1 00:08:03.818 --rc genhtml_function_coverage=1 00:08:03.818 --rc genhtml_legend=1 00:08:03.818 --rc geninfo_all_blocks=1 00:08:03.818 --rc 
geninfo_unexecuted_blocks=1 00:08:03.818 00:08:03.818 ' 00:08:03.818 12:52:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.818 --rc genhtml_branch_coverage=1 00:08:03.818 --rc genhtml_function_coverage=1 00:08:03.818 --rc genhtml_legend=1 00:08:03.818 --rc geninfo_all_blocks=1 00:08:03.818 --rc geninfo_unexecuted_blocks=1 00:08:03.818 00:08:03.818 ' 00:08:03.818 12:52:43 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.818 12:52:43 -- nvmf/common.sh@7 -- # uname -s 00:08:03.818 12:52:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.818 12:52:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.818 12:52:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.818 12:52:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.818 12:52:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.818 12:52:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.818 12:52:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.818 12:52:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.818 12:52:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.818 12:52:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.818 12:52:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:08:03.818 12:52:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:08:03.818 12:52:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.818 12:52:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.818 12:52:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.818 12:52:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.818 12:52:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.818 12:52:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.818 12:52:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.818 12:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.818 12:52:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.818 12:52:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.818 12:52:43 -- paths/export.sh@5 -- # export PATH 00:08:03.818 12:52:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.818 12:52:43 -- nvmf/common.sh@46 -- # : 0 00:08:03.818 12:52:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:03.818 12:52:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:03.818 12:52:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:03.818 12:52:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.818 12:52:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.818 12:52:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:03.818 12:52:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:03.818 12:52:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:03.818 12:52:43 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:03.818 12:52:43 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:03.818 12:52:43 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:03.818 12:52:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:03.818 12:52:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.818 12:52:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:03.818 12:52:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:03.818 12:52:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:03.818 12:52:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.818 12:52:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.818 12:52:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.818 12:52:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:03.818 12:52:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:03.818 12:52:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:03.818 12:52:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:03.818 12:52:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:03.818 12:52:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:03.818 12:52:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.818 12:52:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.818 12:52:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:03.818 12:52:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:03.818 12:52:43 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.818 12:52:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.818 12:52:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.818 12:52:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.818 12:52:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.818 12:52:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.818 12:52:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.818 12:52:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.818 12:52:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:03.818 12:52:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:03.818 Cannot find device "nvmf_tgt_br" 00:08:03.818 12:52:43 -- nvmf/common.sh@154 -- # true 00:08:03.818 12:52:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.818 Cannot find device "nvmf_tgt_br2" 00:08:03.818 12:52:43 -- nvmf/common.sh@155 -- # true 00:08:03.818 12:52:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:03.818 12:52:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:03.818 Cannot find device "nvmf_tgt_br" 00:08:03.818 12:52:43 -- nvmf/common.sh@157 -- # true 00:08:03.818 12:52:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:03.818 Cannot find device "nvmf_tgt_br2" 00:08:03.818 12:52:43 -- nvmf/common.sh@158 -- # true 00:08:03.818 12:52:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:03.818 12:52:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:03.818 12:52:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.818 12:52:43 -- nvmf/common.sh@161 -- # true 00:08:03.818 12:52:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.818 12:52:43 -- nvmf/common.sh@162 -- # true 00:08:03.818 12:52:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:03.818 12:52:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:03.818 12:52:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:03.818 12:52:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:03.818 12:52:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:03.818 12:52:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:03.818 12:52:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:03.818 12:52:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:03.818 12:52:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:03.818 12:52:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:03.818 12:52:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:03.818 12:52:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:03.818 12:52:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:03.818 12:52:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:03.818 12:52:43 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:03.818 12:52:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:03.818 12:52:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:03.818 12:52:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:03.818 12:52:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:03.818 12:52:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:03.818 12:52:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:03.818 12:52:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:03.818 12:52:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:03.818 12:52:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:03.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:03.818 00:08:03.818 --- 10.0.0.2 ping statistics --- 00:08:03.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.818 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:03.818 12:52:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:03.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:03.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:08:03.818 00:08:03.818 --- 10.0.0.3 ping statistics --- 00:08:03.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.818 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:03.818 12:52:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:03.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:03.819 00:08:03.819 --- 10.0.0.1 ping statistics --- 00:08:03.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.819 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:03.819 12:52:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.819 12:52:43 -- nvmf/common.sh@421 -- # return 0 00:08:03.819 12:52:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:03.819 12:52:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.819 12:52:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:03.819 12:52:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:03.819 12:52:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.819 12:52:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:03.819 12:52:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:03.819 12:52:43 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:03.819 12:52:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:03.819 12:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.819 12:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.819 ************************************ 00:08:03.819 START TEST nvmf_filesystem_no_in_capsule 00:08:03.819 ************************************ 00:08:03.819 12:52:43 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:03.819 12:52:43 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:03.819 12:52:43 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:03.819 12:52:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:03.819 12:52:43 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:03.819 12:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.819 12:52:43 -- nvmf/common.sh@469 -- # nvmfpid=72148 00:08:03.819 12:52:43 -- nvmf/common.sh@470 -- # waitforlisten 72148 00:08:03.819 12:52:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.819 12:52:43 -- common/autotest_common.sh@829 -- # '[' -z 72148 ']' 00:08:03.819 12:52:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.819 12:52:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.819 12:52:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.819 12:52:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.819 12:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.819 [2024-12-13 12:52:43.811598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:03.819 [2024-12-13 12:52:43.811683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.819 [2024-12-13 12:52:43.950609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.819 [2024-12-13 12:52:44.008942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.819 [2024-12-13 12:52:44.009065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.819 [2024-12-13 12:52:44.009077] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.819 [2024-12-13 12:52:44.009084] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
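In outline, the nvmf_veth_init sequence recorded above comes down to the sketch below; the interface names, the nvmf_tgt_ns_spdk namespace, the 10.0.0.0/24 addresses and the nvmf_tgt invocation are taken from the commands in this run, with the individual "ip link set ... up" steps elided for brevity:

  # Isolated test topology: the initiator stays in the root namespace, the SPDK
  # target runs inside nvmf_tgt_ns_spdk and is reached over a veth/bridge path.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check: ping -c 1 10.0.0.2 and 10.0.0.3 from the root namespace,
  # and 10.0.0.1 from inside nvmf_tgt_ns_spdk (all three answer in the log above)
  modprobe nvme-tcp
  # the target itself then runs inside the namespace (path and core mask as in this run):
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Keeping the target in its own namespace means the nvme connect further down reaches 10.0.0.2:4420 across a real veth/bridge hop rather than over loopback, even though everything runs on a single VM.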
00:08:03.819 [2024-12-13 12:52:44.009233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.819 [2024-12-13 12:52:44.009805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.819 [2024-12-13 12:52:44.010322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.819 [2024-12-13 12:52:44.010369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.077 12:52:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.077 12:52:44 -- common/autotest_common.sh@862 -- # return 0 00:08:04.077 12:52:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:04.077 12:52:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.077 12:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 12:52:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.336 12:52:44 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.336 12:52:44 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:04.336 12:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.336 12:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 [2024-12-13 12:52:44.884215] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.336 12:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.336 12:52:44 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.336 12:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.336 12:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 Malloc1 00:08:04.336 12:52:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.336 12:52:45 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.336 12:52:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.336 12:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 12:52:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.336 12:52:45 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.336 12:52:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.336 12:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 12:52:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.336 12:52:45 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.336 12:52:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.336 12:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 [2024-12-13 12:52:45.064618] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.336 12:52:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.336 12:52:45 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.336 12:52:45 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:04.336 12:52:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:04.336 12:52:45 -- common/autotest_common.sh@1369 -- # local bs 00:08:04.336 12:52:45 -- common/autotest_common.sh@1370 -- # local nb 00:08:04.336 12:52:45 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.336 12:52:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.336 12:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 
12:52:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.336 12:52:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:04.336 { 00:08:04.336 "aliases": [ 00:08:04.336 "dcf58d1c-d35b-449c-9dd2-be6b6a7f777e" 00:08:04.336 ], 00:08:04.336 "assigned_rate_limits": { 00:08:04.336 "r_mbytes_per_sec": 0, 00:08:04.336 "rw_ios_per_sec": 0, 00:08:04.336 "rw_mbytes_per_sec": 0, 00:08:04.336 "w_mbytes_per_sec": 0 00:08:04.336 }, 00:08:04.336 "block_size": 512, 00:08:04.336 "claim_type": "exclusive_write", 00:08:04.336 "claimed": true, 00:08:04.336 "driver_specific": {}, 00:08:04.336 "memory_domains": [ 00:08:04.336 { 00:08:04.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.336 "dma_device_type": 2 00:08:04.336 } 00:08:04.336 ], 00:08:04.336 "name": "Malloc1", 00:08:04.336 "num_blocks": 1048576, 00:08:04.336 "product_name": "Malloc disk", 00:08:04.336 "supported_io_types": { 00:08:04.336 "abort": true, 00:08:04.336 "compare": false, 00:08:04.336 "compare_and_write": false, 00:08:04.336 "flush": true, 00:08:04.336 "nvme_admin": false, 00:08:04.336 "nvme_io": false, 00:08:04.336 "read": true, 00:08:04.336 "reset": true, 00:08:04.336 "unmap": true, 00:08:04.336 "write": true, 00:08:04.336 "write_zeroes": true 00:08:04.336 }, 00:08:04.336 "uuid": "dcf58d1c-d35b-449c-9dd2-be6b6a7f777e", 00:08:04.336 "zoned": false 00:08:04.336 } 00:08:04.336 ]' 00:08:04.336 12:52:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:04.594 12:52:45 -- common/autotest_common.sh@1372 -- # bs=512 00:08:04.594 12:52:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:04.594 12:52:45 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:04.594 12:52:45 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:04.594 12:52:45 -- common/autotest_common.sh@1377 -- # echo 512 00:08:04.594 12:52:45 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:04.594 12:52:45 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:04.852 12:52:45 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.852 12:52:45 -- common/autotest_common.sh@1187 -- # local i=0 00:08:04.852 12:52:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.852 12:52:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:04.852 12:52:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:06.754 12:52:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:06.754 12:52:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:06.754 12:52:47 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.754 12:52:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:06.754 12:52:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.754 12:52:47 -- common/autotest_common.sh@1197 -- # return 0 00:08:06.754 12:52:47 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:06.754 12:52:47 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:06.754 12:52:47 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:06.754 12:52:47 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:06.754 12:52:47 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:06.754 12:52:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:06.754 12:52:47 -- 
setup/common.sh@80 -- # echo 536870912 00:08:06.754 12:52:47 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:06.754 12:52:47 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:06.754 12:52:47 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:06.754 12:52:47 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:06.754 12:52:47 -- target/filesystem.sh@69 -- # partprobe 00:08:07.013 12:52:47 -- target/filesystem.sh@70 -- # sleep 1 00:08:07.948 12:52:48 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:07.948 12:52:48 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:07.948 12:52:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:07.948 12:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.948 12:52:48 -- common/autotest_common.sh@10 -- # set +x 00:08:07.948 ************************************ 00:08:07.948 START TEST filesystem_ext4 00:08:07.948 ************************************ 00:08:07.948 12:52:48 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:07.948 12:52:48 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:07.948 12:52:48 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.948 12:52:48 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:07.948 12:52:48 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:07.948 12:52:48 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:07.948 12:52:48 -- common/autotest_common.sh@914 -- # local i=0 00:08:07.948 12:52:48 -- common/autotest_common.sh@915 -- # local force 00:08:07.948 12:52:48 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:07.948 12:52:48 -- common/autotest_common.sh@918 -- # force=-F 00:08:07.948 12:52:48 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:07.948 mke2fs 1.47.0 (5-Feb-2023) 00:08:07.948 Discarding device blocks: 0/522240 done 00:08:07.948 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:07.948 Filesystem UUID: f7d463ab-a966-4723-aa94-074ebc5b17aa 00:08:07.948 Superblock backups stored on blocks: 00:08:07.948 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:07.948 00:08:07.948 Allocating group tables: 0/64 done 00:08:07.948 Writing inode tables: 0/64 done 00:08:07.948 Creating journal (8192 blocks): done 00:08:07.948 Writing superblocks and filesystem accounting information: 0/64 done 00:08:07.948 00:08:07.948 12:52:48 -- common/autotest_common.sh@931 -- # return 0 00:08:07.948 12:52:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.510 12:52:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.510 12:52:54 -- target/filesystem.sh@25 -- # sync 00:08:14.510 12:52:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.510 12:52:54 -- target/filesystem.sh@27 -- # sync 00:08:14.510 12:52:54 -- target/filesystem.sh@29 -- # i=0 00:08:14.510 12:52:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.510 12:52:54 -- target/filesystem.sh@37 -- # kill -0 72148 00:08:14.510 12:52:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.510 12:52:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.510 12:52:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.510 12:52:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.510 ************************************ 00:08:14.510 END TEST filesystem_ext4 00:08:14.510 
************************************ 00:08:14.510 00:08:14.510 real 0m5.643s 00:08:14.510 user 0m0.017s 00:08:14.510 sys 0m0.075s 00:08:14.510 12:52:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.510 12:52:54 -- common/autotest_common.sh@10 -- # set +x 00:08:14.510 12:52:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:14.510 12:52:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:14.510 12:52:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.510 12:52:54 -- common/autotest_common.sh@10 -- # set +x 00:08:14.510 ************************************ 00:08:14.510 START TEST filesystem_btrfs 00:08:14.510 ************************************ 00:08:14.510 12:52:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:14.510 12:52:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:14.510 12:52:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.510 12:52:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:14.510 12:52:54 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:14.510 12:52:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:14.510 12:52:54 -- common/autotest_common.sh@914 -- # local i=0 00:08:14.510 12:52:54 -- common/autotest_common.sh@915 -- # local force 00:08:14.510 12:52:54 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:14.510 12:52:54 -- common/autotest_common.sh@920 -- # force=-f 00:08:14.510 12:52:54 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:14.510 btrfs-progs v6.8.1 00:08:14.510 See https://btrfs.readthedocs.io for more information. 00:08:14.510 00:08:14.510 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:14.510 NOTE: several default settings have changed in version 5.15, please make sure 00:08:14.510 this does not affect your deployments: 00:08:14.510 - DUP for metadata (-m dup) 00:08:14.510 - enabled no-holes (-O no-holes) 00:08:14.510 - enabled free-space-tree (-R free-space-tree) 00:08:14.510 00:08:14.510 Label: (null) 00:08:14.510 UUID: 98050fa6-405d-457d-8d56-5dc6463ec870 00:08:14.510 Node size: 16384 00:08:14.510 Sector size: 4096 (CPU page size: 4096) 00:08:14.510 Filesystem size: 510.00MiB 00:08:14.510 Block group profiles: 00:08:14.510 Data: single 8.00MiB 00:08:14.510 Metadata: DUP 32.00MiB 00:08:14.510 System: DUP 8.00MiB 00:08:14.510 SSD detected: yes 00:08:14.510 Zoned device: no 00:08:14.510 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:14.510 Checksum: crc32c 00:08:14.510 Number of devices: 1 00:08:14.510 Devices: 00:08:14.510 ID SIZE PATH 00:08:14.510 1 510.00MiB /dev/nvme0n1p1 00:08:14.510 00:08:14.510 12:52:54 -- common/autotest_common.sh@931 -- # return 0 00:08:14.510 12:52:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.510 12:52:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.510 12:52:54 -- target/filesystem.sh@25 -- # sync 00:08:14.510 12:52:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.510 12:52:54 -- target/filesystem.sh@27 -- # sync 00:08:14.510 12:52:54 -- target/filesystem.sh@29 -- # i=0 00:08:14.510 12:52:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.510 12:52:54 -- target/filesystem.sh@37 -- # kill -0 72148 00:08:14.510 12:52:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.510 12:52:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.510 12:52:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.510 12:52:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.510 ************************************ 00:08:14.510 END TEST filesystem_btrfs 00:08:14.510 ************************************ 00:08:14.510 00:08:14.510 real 0m0.227s 00:08:14.510 user 0m0.021s 00:08:14.510 sys 0m0.063s 00:08:14.510 12:52:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.510 12:52:54 -- common/autotest_common.sh@10 -- # set +x 00:08:14.510 12:52:54 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:14.510 12:52:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:14.510 12:52:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.510 12:52:54 -- common/autotest_common.sh@10 -- # set +x 00:08:14.510 ************************************ 00:08:14.510 START TEST filesystem_xfs 00:08:14.510 ************************************ 00:08:14.510 12:52:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:14.510 12:52:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:14.510 12:52:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.510 12:52:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:14.510 12:52:54 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:14.510 12:52:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:14.510 12:52:54 -- common/autotest_common.sh@914 -- # local i=0 00:08:14.510 12:52:54 -- common/autotest_common.sh@915 -- # local force 00:08:14.510 12:52:54 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:14.510 12:52:54 -- common/autotest_common.sh@920 -- # force=-f 00:08:14.510 12:52:54 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:14.510 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:14.510 = sectsz=512 attr=2, projid32bit=1 00:08:14.510 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:14.510 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:14.510 data = bsize=4096 blocks=130560, imaxpct=25 00:08:14.510 = sunit=0 swidth=0 blks 00:08:14.510 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:14.510 log =internal log bsize=4096 blocks=16384, version=2 00:08:14.510 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:14.510 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:14.768 Discarding blocks...Done. 00:08:14.768 12:52:55 -- common/autotest_common.sh@931 -- # return 0 00:08:14.768 12:52:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.298 12:52:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.298 12:52:57 -- target/filesystem.sh@25 -- # sync 00:08:17.298 12:52:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.298 12:52:57 -- target/filesystem.sh@27 -- # sync 00:08:17.298 12:52:57 -- target/filesystem.sh@29 -- # i=0 00:08:17.298 12:52:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.298 12:52:57 -- target/filesystem.sh@37 -- # kill -0 72148 00:08:17.298 12:52:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.298 12:52:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.298 12:52:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.298 12:52:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.298 ************************************ 00:08:17.298 END TEST filesystem_xfs 00:08:17.298 ************************************ 00:08:17.298 00:08:17.298 real 0m3.120s 00:08:17.298 user 0m0.022s 00:08:17.298 sys 0m0.058s 00:08:17.298 12:52:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.298 12:52:57 -- common/autotest_common.sh@10 -- # set +x 00:08:17.298 12:52:57 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:17.298 12:52:57 -- target/filesystem.sh@93 -- # sync 00:08:17.298 12:52:57 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.298 12:52:57 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.298 12:52:57 -- common/autotest_common.sh@1208 -- # local i=0 00:08:17.298 12:52:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:17.298 12:52:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.298 12:52:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:17.298 12:52:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.299 12:52:57 -- common/autotest_common.sh@1220 -- # return 0 00:08:17.299 12:52:57 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.299 12:52:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.299 12:52:57 -- common/autotest_common.sh@10 -- # set +x 00:08:17.299 12:52:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.299 12:52:57 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:17.299 12:52:57 -- target/filesystem.sh@101 -- # killprocess 72148 00:08:17.299 12:52:57 -- common/autotest_common.sh@936 -- # '[' -z 72148 ']' 00:08:17.299 12:52:57 -- common/autotest_common.sh@940 -- # kill -0 72148 00:08:17.299 12:52:57 -- common/autotest_common.sh@941 -- # uname 00:08:17.299 12:52:57 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:17.299 12:52:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72148 00:08:17.299 killing process with pid 72148 00:08:17.299 12:52:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:17.299 12:52:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:17.299 12:52:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72148' 00:08:17.299 12:52:57 -- common/autotest_common.sh@955 -- # kill 72148 00:08:17.299 12:52:57 -- common/autotest_common.sh@960 -- # wait 72148 00:08:17.557 12:52:58 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:17.557 00:08:17.557 real 0m14.461s 00:08:17.557 user 0m55.645s 00:08:17.557 sys 0m1.918s 00:08:17.557 12:52:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.557 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:08:17.557 ************************************ 00:08:17.557 END TEST nvmf_filesystem_no_in_capsule 00:08:17.557 ************************************ 00:08:17.557 12:52:58 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:17.557 12:52:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.557 12:52:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.557 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:08:17.557 ************************************ 00:08:17.558 START TEST nvmf_filesystem_in_capsule 00:08:17.558 ************************************ 00:08:17.558 12:52:58 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:17.558 12:52:58 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:17.558 12:52:58 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:17.558 12:52:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:17.558 12:52:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.558 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:08:17.558 12:52:58 -- nvmf/common.sh@469 -- # nvmfpid=72520 00:08:17.558 12:52:58 -- nvmf/common.sh@470 -- # waitforlisten 72520 00:08:17.558 12:52:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.558 12:52:58 -- common/autotest_common.sh@829 -- # '[' -z 72520 ']' 00:08:17.558 12:52:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.558 12:52:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.558 12:52:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.558 12:52:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.558 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:08:17.558 [2024-12-13 12:52:58.323682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
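The second pass that starts here (nvmf_filesystem_in_capsule, nvmfpid 72520) repeats the export/connect/filesystem cycle the zero-capsule run above just finished; the only functional difference is the in-capsule data size passed to nvmf_create_transport (-c 4096 instead of -c 0). Condensed, with rpc_cmd standing for the harness's RPC helper (the log shows it waiting on /var/tmp/spdk.sock) and $NVME_HOSTNQN/$NVME_HOSTID standing for this run's generated host NQN and ID, one cycle is:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0        # this second run uses -c 4096
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB backing bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
  # per filesystem subtest (ext4 uses -F, btrfs and xfs use -f):
  mkfs.ext4 -F /dev/nvme0n1p1
  mkdir -p /mnt/device && mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
  umount /mnt/device
  # teardown, as at the end of each run above:
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # the target process is then killed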
00:08:17.558 [2024-12-13 12:52:58.323817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.817 [2024-12-13 12:52:58.464572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.817 [2024-12-13 12:52:58.521139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.817 [2024-12-13 12:52:58.521567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.817 [2024-12-13 12:52:58.521587] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.817 [2024-12-13 12:52:58.521595] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.817 [2024-12-13 12:52:58.521701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.817 [2024-12-13 12:52:58.521886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.817 [2024-12-13 12:52:58.522140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.817 [2024-12-13 12:52:58.522144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.753 12:52:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.753 12:52:59 -- common/autotest_common.sh@862 -- # return 0 00:08:18.753 12:52:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:18.753 12:52:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.753 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 12:52:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.753 12:52:59 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:18.753 12:52:59 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:18.753 12:52:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.753 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 [2024-12-13 12:52:59.278412] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.753 12:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.753 12:52:59 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:18.753 12:52:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.753 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 Malloc1 00:08:18.753 12:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.753 12:52:59 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.753 12:52:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.753 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 12:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.753 12:52:59 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:18.753 12:52:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.753 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 12:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.753 12:52:59 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.753 12:52:59 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.753 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 [2024-12-13 12:52:59.451531] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.753 12:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.753 12:52:59 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:18.753 12:52:59 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:18.753 12:52:59 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:18.753 12:52:59 -- common/autotest_common.sh@1369 -- # local bs 00:08:18.753 12:52:59 -- common/autotest_common.sh@1370 -- # local nb 00:08:18.753 12:52:59 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:18.753 12:52:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.753 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 12:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.753 12:52:59 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:18.753 { 00:08:18.753 "aliases": [ 00:08:18.753 "8ffe77cf-90a2-4bef-821a-f89962bbc263" 00:08:18.753 ], 00:08:18.753 "assigned_rate_limits": { 00:08:18.753 "r_mbytes_per_sec": 0, 00:08:18.753 "rw_ios_per_sec": 0, 00:08:18.753 "rw_mbytes_per_sec": 0, 00:08:18.753 "w_mbytes_per_sec": 0 00:08:18.753 }, 00:08:18.753 "block_size": 512, 00:08:18.753 "claim_type": "exclusive_write", 00:08:18.753 "claimed": true, 00:08:18.753 "driver_specific": {}, 00:08:18.753 "memory_domains": [ 00:08:18.753 { 00:08:18.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.753 "dma_device_type": 2 00:08:18.753 } 00:08:18.753 ], 00:08:18.753 "name": "Malloc1", 00:08:18.753 "num_blocks": 1048576, 00:08:18.753 "product_name": "Malloc disk", 00:08:18.753 "supported_io_types": { 00:08:18.753 "abort": true, 00:08:18.753 "compare": false, 00:08:18.753 "compare_and_write": false, 00:08:18.753 "flush": true, 00:08:18.753 "nvme_admin": false, 00:08:18.753 "nvme_io": false, 00:08:18.753 "read": true, 00:08:18.753 "reset": true, 00:08:18.753 "unmap": true, 00:08:18.753 "write": true, 00:08:18.753 "write_zeroes": true 00:08:18.753 }, 00:08:18.753 "uuid": "8ffe77cf-90a2-4bef-821a-f89962bbc263", 00:08:18.753 "zoned": false 00:08:18.753 } 00:08:18.753 ]' 00:08:18.753 12:52:59 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:18.753 12:52:59 -- common/autotest_common.sh@1372 -- # bs=512 00:08:19.012 12:52:59 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:19.012 12:52:59 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:19.012 12:52:59 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:19.012 12:52:59 -- common/autotest_common.sh@1377 -- # echo 512 00:08:19.012 12:52:59 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:19.012 12:52:59 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.012 12:52:59 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.012 12:52:59 -- common/autotest_common.sh@1187 -- # local i=0 00:08:19.012 12:52:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.012 12:52:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:19.012 12:52:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:21.543 12:53:01 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:21.543 12:53:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:21.543 12:53:01 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.543 12:53:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:21.543 12:53:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.543 12:53:01 -- common/autotest_common.sh@1197 -- # return 0 00:08:21.543 12:53:01 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:21.543 12:53:01 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:21.543 12:53:01 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:21.543 12:53:01 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:21.543 12:53:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:21.543 12:53:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:21.543 12:53:01 -- setup/common.sh@80 -- # echo 536870912 00:08:21.543 12:53:01 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:21.543 12:53:01 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:21.543 12:53:01 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:21.543 12:53:01 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.543 12:53:01 -- target/filesystem.sh@69 -- # partprobe 00:08:21.543 12:53:01 -- target/filesystem.sh@70 -- # sleep 1 00:08:22.479 12:53:02 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:22.479 12:53:02 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:22.479 12:53:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:22.479 12:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.479 12:53:02 -- common/autotest_common.sh@10 -- # set +x 00:08:22.479 ************************************ 00:08:22.479 START TEST filesystem_in_capsule_ext4 00:08:22.479 ************************************ 00:08:22.479 12:53:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:22.479 12:53:02 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:22.479 12:53:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.479 12:53:02 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:22.479 12:53:02 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:22.479 12:53:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:22.479 12:53:02 -- common/autotest_common.sh@914 -- # local i=0 00:08:22.479 12:53:02 -- common/autotest_common.sh@915 -- # local force 00:08:22.479 12:53:02 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:22.479 12:53:02 -- common/autotest_common.sh@918 -- # force=-F 00:08:22.479 12:53:02 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:22.479 mke2fs 1.47.0 (5-Feb-2023) 00:08:22.479 Discarding device blocks: 0/522240 done 00:08:22.479 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:22.479 Filesystem UUID: 681bec85-96ea-4221-8c49-936fec33c4ef 00:08:22.479 Superblock backups stored on blocks: 00:08:22.479 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:22.479 00:08:22.479 Allocating group tables: 0/64 done 00:08:22.479 Writing inode tables: 0/64 done 00:08:22.479 Creating journal (8192 blocks): done 00:08:22.479 Writing superblocks and filesystem accounting information: 0/64 done 00:08:22.479 00:08:22.479 12:53:03 
-- common/autotest_common.sh@931 -- # return 0 00:08:22.479 12:53:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.746 12:53:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.746 12:53:08 -- target/filesystem.sh@25 -- # sync 00:08:27.746 12:53:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.746 12:53:08 -- target/filesystem.sh@27 -- # sync 00:08:27.746 12:53:08 -- target/filesystem.sh@29 -- # i=0 00:08:27.746 12:53:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.746 12:53:08 -- target/filesystem.sh@37 -- # kill -0 72520 00:08:27.746 12:53:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.746 12:53:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.746 12:53:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.746 12:53:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.746 ************************************ 00:08:27.746 END TEST filesystem_in_capsule_ext4 00:08:27.746 ************************************ 00:08:27.746 00:08:27.746 real 0m5.575s 00:08:27.746 user 0m0.029s 00:08:27.746 sys 0m0.057s 00:08:27.746 12:53:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.746 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.005 12:53:08 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:28.005 12:53:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:28.005 12:53:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.005 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.005 ************************************ 00:08:28.005 START TEST filesystem_in_capsule_btrfs 00:08:28.005 ************************************ 00:08:28.005 12:53:08 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:28.005 12:53:08 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:28.005 12:53:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.005 12:53:08 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:28.005 12:53:08 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:28.005 12:53:08 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:28.005 12:53:08 -- common/autotest_common.sh@914 -- # local i=0 00:08:28.005 12:53:08 -- common/autotest_common.sh@915 -- # local force 00:08:28.005 12:53:08 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:28.005 12:53:08 -- common/autotest_common.sh@920 -- # force=-f 00:08:28.005 12:53:08 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:28.005 btrfs-progs v6.8.1 00:08:28.005 See https://btrfs.readthedocs.io for more information. 00:08:28.005 00:08:28.005 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:28.005 NOTE: several default settings have changed in version 5.15, please make sure 00:08:28.005 this does not affect your deployments: 00:08:28.005 - DUP for metadata (-m dup) 00:08:28.005 - enabled no-holes (-O no-holes) 00:08:28.005 - enabled free-space-tree (-R free-space-tree) 00:08:28.005 00:08:28.005 Label: (null) 00:08:28.005 UUID: a927e2f7-f201-4463-92bd-dfda555e22b0 00:08:28.005 Node size: 16384 00:08:28.005 Sector size: 4096 (CPU page size: 4096) 00:08:28.005 Filesystem size: 510.00MiB 00:08:28.005 Block group profiles: 00:08:28.005 Data: single 8.00MiB 00:08:28.005 Metadata: DUP 32.00MiB 00:08:28.005 System: DUP 8.00MiB 00:08:28.005 SSD detected: yes 00:08:28.005 Zoned device: no 00:08:28.005 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:28.005 Checksum: crc32c 00:08:28.005 Number of devices: 1 00:08:28.005 Devices: 00:08:28.005 ID SIZE PATH 00:08:28.005 1 510.00MiB /dev/nvme0n1p1 00:08:28.005 00:08:28.005 12:53:08 -- common/autotest_common.sh@931 -- # return 0 00:08:28.005 12:53:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.005 12:53:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.005 12:53:08 -- target/filesystem.sh@25 -- # sync 00:08:28.005 12:53:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.005 12:53:08 -- target/filesystem.sh@27 -- # sync 00:08:28.005 12:53:08 -- target/filesystem.sh@29 -- # i=0 00:08:28.005 12:53:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.264 12:53:08 -- target/filesystem.sh@37 -- # kill -0 72520 00:08:28.264 12:53:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.264 12:53:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.264 12:53:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.264 12:53:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.264 ************************************ 00:08:28.264 END TEST filesystem_in_capsule_btrfs 00:08:28.264 ************************************ 00:08:28.264 00:08:28.264 real 0m0.263s 00:08:28.264 user 0m0.027s 00:08:28.264 sys 0m0.056s 00:08:28.264 12:53:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:28.264 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.264 12:53:08 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:28.264 12:53:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:28.264 12:53:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.264 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.264 ************************************ 00:08:28.264 START TEST filesystem_in_capsule_xfs 00:08:28.264 ************************************ 00:08:28.264 12:53:08 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:28.264 12:53:08 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:28.264 12:53:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.264 12:53:08 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:28.264 12:53:08 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:28.264 12:53:08 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:28.264 12:53:08 -- common/autotest_common.sh@914 -- # local i=0 00:08:28.264 12:53:08 -- common/autotest_common.sh@915 -- # local force 00:08:28.264 12:53:08 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:28.264 12:53:08 -- common/autotest_common.sh@920 -- # force=-f 00:08:28.264 12:53:08 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:28.264 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:28.264 = sectsz=512 attr=2, projid32bit=1 00:08:28.264 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:28.264 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:28.264 data = bsize=4096 blocks=130560, imaxpct=25 00:08:28.264 = sunit=0 swidth=0 blks 00:08:28.264 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:28.264 log =internal log bsize=4096 blocks=16384, version=2 00:08:28.264 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:28.264 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:29.200 Discarding blocks...Done. 00:08:29.200 12:53:09 -- common/autotest_common.sh@931 -- # return 0 00:08:29.200 12:53:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.103 12:53:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.103 12:53:11 -- target/filesystem.sh@25 -- # sync 00:08:31.103 12:53:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.103 12:53:11 -- target/filesystem.sh@27 -- # sync 00:08:31.103 12:53:11 -- target/filesystem.sh@29 -- # i=0 00:08:31.103 12:53:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.103 12:53:11 -- target/filesystem.sh@37 -- # kill -0 72520 00:08:31.104 12:53:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.104 12:53:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.104 12:53:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.104 12:53:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.104 ************************************ 00:08:31.104 END TEST filesystem_in_capsule_xfs 00:08:31.104 ************************************ 00:08:31.104 00:08:31.104 real 0m2.636s 00:08:31.104 user 0m0.026s 00:08:31.104 sys 0m0.050s 00:08:31.104 12:53:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.104 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:08:31.104 12:53:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:31.104 12:53:11 -- target/filesystem.sh@93 -- # sync 00:08:31.104 12:53:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:31.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.104 12:53:11 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:31.104 12:53:11 -- common/autotest_common.sh@1208 -- # local i=0 00:08:31.104 12:53:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:31.104 12:53:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.104 12:53:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:31.104 12:53:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.104 12:53:11 -- common/autotest_common.sh@1220 -- # return 0 00:08:31.104 12:53:11 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.104 12:53:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.104 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:08:31.104 12:53:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.104 12:53:11 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:31.104 12:53:11 -- target/filesystem.sh@101 -- # killprocess 72520 00:08:31.104 12:53:11 -- common/autotest_common.sh@936 -- # '[' -z 72520 ']' 00:08:31.104 12:53:11 -- common/autotest_common.sh@940 -- # kill -0 72520 00:08:31.104 12:53:11 -- 
common/autotest_common.sh@941 -- # uname 00:08:31.104 12:53:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:31.104 12:53:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72520 00:08:31.104 12:53:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:31.104 killing process with pid 72520 00:08:31.104 12:53:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:31.104 12:53:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72520' 00:08:31.104 12:53:11 -- common/autotest_common.sh@955 -- # kill 72520 00:08:31.104 12:53:11 -- common/autotest_common.sh@960 -- # wait 72520 00:08:31.363 12:53:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:31.363 00:08:31.363 real 0m13.827s 00:08:31.363 user 0m52.916s 00:08:31.363 sys 0m2.054s 00:08:31.363 12:53:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.363 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:08:31.363 ************************************ 00:08:31.363 END TEST nvmf_filesystem_in_capsule 00:08:31.363 ************************************ 00:08:31.363 12:53:12 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:31.363 12:53:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:31.363 12:53:12 -- nvmf/common.sh@116 -- # sync 00:08:31.622 12:53:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:31.622 12:53:12 -- nvmf/common.sh@119 -- # set +e 00:08:31.622 12:53:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:31.622 12:53:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:31.622 rmmod nvme_tcp 00:08:31.622 rmmod nvme_fabrics 00:08:31.622 rmmod nvme_keyring 00:08:31.622 12:53:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:31.622 12:53:12 -- nvmf/common.sh@123 -- # set -e 00:08:31.622 12:53:12 -- nvmf/common.sh@124 -- # return 0 00:08:31.622 12:53:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:31.622 12:53:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:31.622 12:53:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:31.622 12:53:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:31.622 12:53:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.622 12:53:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:31.622 12:53:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.622 12:53:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.622 12:53:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.622 12:53:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:31.622 00:08:31.622 real 0m29.292s 00:08:31.622 user 1m48.953s 00:08:31.622 sys 0m4.388s 00:08:31.622 12:53:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.622 ************************************ 00:08:31.622 END TEST nvmf_filesystem 00:08:31.622 ************************************ 00:08:31.622 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:08:31.622 12:53:12 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.622 12:53:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:31.622 12:53:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.622 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:08:31.622 ************************************ 00:08:31.622 START TEST nvmf_discovery 00:08:31.622 ************************************ 00:08:31.622 12:53:12 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.622 * Looking for test storage... 00:08:31.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.622 12:53:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:31.622 12:53:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:31.622 12:53:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:31.881 12:53:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:31.881 12:53:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:31.881 12:53:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:31.881 12:53:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:31.881 12:53:12 -- scripts/common.sh@335 -- # IFS=.-: 00:08:31.881 12:53:12 -- scripts/common.sh@335 -- # read -ra ver1 00:08:31.881 12:53:12 -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.881 12:53:12 -- scripts/common.sh@336 -- # read -ra ver2 00:08:31.881 12:53:12 -- scripts/common.sh@337 -- # local 'op=<' 00:08:31.881 12:53:12 -- scripts/common.sh@339 -- # ver1_l=2 00:08:31.881 12:53:12 -- scripts/common.sh@340 -- # ver2_l=1 00:08:31.881 12:53:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:31.881 12:53:12 -- scripts/common.sh@343 -- # case "$op" in 00:08:31.881 12:53:12 -- scripts/common.sh@344 -- # : 1 00:08:31.881 12:53:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:31.881 12:53:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.881 12:53:12 -- scripts/common.sh@364 -- # decimal 1 00:08:31.881 12:53:12 -- scripts/common.sh@352 -- # local d=1 00:08:31.881 12:53:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.881 12:53:12 -- scripts/common.sh@354 -- # echo 1 00:08:31.881 12:53:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:31.881 12:53:12 -- scripts/common.sh@365 -- # decimal 2 00:08:31.881 12:53:12 -- scripts/common.sh@352 -- # local d=2 00:08:31.881 12:53:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.881 12:53:12 -- scripts/common.sh@354 -- # echo 2 00:08:31.881 12:53:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:31.881 12:53:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:31.881 12:53:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:31.881 12:53:12 -- scripts/common.sh@367 -- # return 0 00:08:31.881 12:53:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.881 12:53:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:31.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.881 --rc genhtml_branch_coverage=1 00:08:31.881 --rc genhtml_function_coverage=1 00:08:31.881 --rc genhtml_legend=1 00:08:31.881 --rc geninfo_all_blocks=1 00:08:31.881 --rc geninfo_unexecuted_blocks=1 00:08:31.881 00:08:31.881 ' 00:08:31.881 12:53:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:31.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.881 --rc genhtml_branch_coverage=1 00:08:31.881 --rc genhtml_function_coverage=1 00:08:31.881 --rc genhtml_legend=1 00:08:31.881 --rc geninfo_all_blocks=1 00:08:31.881 --rc geninfo_unexecuted_blocks=1 00:08:31.881 00:08:31.881 ' 00:08:31.881 12:53:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:31.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.881 --rc genhtml_branch_coverage=1 00:08:31.881 --rc genhtml_function_coverage=1 00:08:31.881 --rc genhtml_legend=1 00:08:31.881 
--rc geninfo_all_blocks=1 00:08:31.881 --rc geninfo_unexecuted_blocks=1 00:08:31.881 00:08:31.881 ' 00:08:31.881 12:53:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:31.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.881 --rc genhtml_branch_coverage=1 00:08:31.881 --rc genhtml_function_coverage=1 00:08:31.881 --rc genhtml_legend=1 00:08:31.881 --rc geninfo_all_blocks=1 00:08:31.881 --rc geninfo_unexecuted_blocks=1 00:08:31.881 00:08:31.881 ' 00:08:31.881 12:53:12 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.881 12:53:12 -- nvmf/common.sh@7 -- # uname -s 00:08:31.881 12:53:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.881 12:53:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.881 12:53:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.881 12:53:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.881 12:53:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.881 12:53:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.881 12:53:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.881 12:53:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.881 12:53:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.881 12:53:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.881 12:53:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:08:31.881 12:53:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:08:31.881 12:53:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.881 12:53:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.881 12:53:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.881 12:53:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.881 12:53:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.881 12:53:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.881 12:53:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.882 12:53:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.882 12:53:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.882 12:53:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.882 12:53:12 -- paths/export.sh@5 -- # export PATH 00:08:31.882 12:53:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.882 12:53:12 -- nvmf/common.sh@46 -- # : 0 00:08:31.882 12:53:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:31.882 12:53:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:31.882 12:53:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:31.882 12:53:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.882 12:53:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.882 12:53:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:31.882 12:53:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:31.882 12:53:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:31.882 12:53:12 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:31.882 12:53:12 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:31.882 12:53:12 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:31.882 12:53:12 -- target/discovery.sh@15 -- # hash nvme 00:08:31.882 12:53:12 -- target/discovery.sh@20 -- # nvmftestinit 00:08:31.882 12:53:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:31.882 12:53:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.882 12:53:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:31.882 12:53:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:31.882 12:53:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:31.882 12:53:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.882 12:53:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.882 12:53:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.882 12:53:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:31.882 12:53:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:31.882 12:53:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:31.882 12:53:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:31.882 12:53:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:31.882 12:53:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:31.882 12:53:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.882 12:53:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.882 12:53:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.882 12:53:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:31.882 12:53:12 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.882 12:53:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.882 12:53:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.882 12:53:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.882 12:53:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.882 12:53:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.882 12:53:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.882 12:53:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.882 12:53:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:31.882 12:53:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:31.882 Cannot find device "nvmf_tgt_br" 00:08:31.882 12:53:12 -- nvmf/common.sh@154 -- # true 00:08:31.882 12:53:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.882 Cannot find device "nvmf_tgt_br2" 00:08:31.882 12:53:12 -- nvmf/common.sh@155 -- # true 00:08:31.882 12:53:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:31.882 12:53:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:31.882 Cannot find device "nvmf_tgt_br" 00:08:31.882 12:53:12 -- nvmf/common.sh@157 -- # true 00:08:31.882 12:53:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:31.882 Cannot find device "nvmf_tgt_br2" 00:08:31.882 12:53:12 -- nvmf/common.sh@158 -- # true 00:08:31.882 12:53:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:31.882 12:53:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:31.882 12:53:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.882 12:53:12 -- nvmf/common.sh@161 -- # true 00:08:31.882 12:53:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.882 12:53:12 -- nvmf/common.sh@162 -- # true 00:08:31.882 12:53:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.882 12:53:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.882 12:53:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.882 12:53:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.882 12:53:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:32.141 12:53:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:32.141 12:53:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:32.141 12:53:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:32.141 12:53:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:32.141 12:53:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:32.141 12:53:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:32.141 12:53:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:32.141 12:53:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:32.141 12:53:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:32.141 12:53:12 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:32.141 12:53:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:32.141 12:53:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:32.141 12:53:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:32.141 12:53:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:32.141 12:53:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.141 12:53:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.141 12:53:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.141 12:53:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.141 12:53:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:32.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:08:32.141 00:08:32.141 --- 10.0.0.2 ping statistics --- 00:08:32.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.141 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:32.141 12:53:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:32.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:32.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:32.141 00:08:32.141 --- 10.0.0.3 ping statistics --- 00:08:32.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.141 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:32.141 12:53:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:32.141 00:08:32.141 --- 10.0.0.1 ping statistics --- 00:08:32.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.141 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:32.141 12:53:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.141 12:53:12 -- nvmf/common.sh@421 -- # return 0 00:08:32.141 12:53:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:32.141 12:53:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.141 12:53:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:32.141 12:53:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:32.141 12:53:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.141 12:53:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:32.141 12:53:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:32.141 12:53:12 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:32.141 12:53:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:32.141 12:53:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.141 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:08:32.141 12:53:12 -- nvmf/common.sh@469 -- # nvmfpid=73058 00:08:32.141 12:53:12 -- nvmf/common.sh@470 -- # waitforlisten 73058 00:08:32.141 12:53:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.141 12:53:12 -- common/autotest_common.sh@829 -- # '[' -z 73058 ']' 00:08:32.141 12:53:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
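The nvmf_veth_init run traced above builds a self-contained test topology: the initiator side stays in the root namespace on nvmf_init_if (10.0.0.1), the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the three veth peers are joined by the nvmf_br bridge before the target application is started. A condensed sketch of that same sequence, with device names, addresses and rules taken from the trace (run as root, error handling omitted):

# Condensed from the nvmf_veth_init trace above; not the literal helper code.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one initiator-side, two target-side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; the initiator end stays in the root ns.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the peer ends so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open the NVMe/TCP port, allow bridge forwarding, then sanity-ping both ways.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1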
00:08:32.141 12:53:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.141 12:53:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.141 12:53:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.141 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:08:32.141 [2024-12-13 12:53:12.885943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:32.141 [2024-12-13 12:53:12.886052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.401 [2024-12-13 12:53:13.024124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.401 [2024-12-13 12:53:13.103737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:32.401 [2024-12-13 12:53:13.103936] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.401 [2024-12-13 12:53:13.103953] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.401 [2024-12-13 12:53:13.103965] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.401 [2024-12-13 12:53:13.104092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.401 [2024-12-13 12:53:13.104238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.401 [2024-12-13 12:53:13.104946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.401 [2024-12-13 12:53:13.104956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.337 12:53:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.337 12:53:13 -- common/autotest_common.sh@862 -- # return 0 00:08:33.337 12:53:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:33.337 12:53:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.337 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.337 12:53:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.337 12:53:13 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.337 12:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.337 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.337 [2024-12-13 12:53:13.934585] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.337 12:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.337 12:53:13 -- target/discovery.sh@26 -- # seq 1 4 00:08:33.337 12:53:13 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.338 12:53:13 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:33.338 12:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 Null1 00:08:33.338 12:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:13 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.338 12:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:13 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:33.338 12:53:13 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:33.338 12:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:13 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.338 12:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 [2024-12-13 12:53:13.991295] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.338 12:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:13 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.338 12:53:13 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:33.338 12:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 Null2 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.338 12:53:14 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 Null3 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.338 12:53:14 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 Null4 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:33.338 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.338 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.338 12:53:14 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 4420 00:08:33.597 00:08:33.597 Discovery Log Number of Records 6, Generation counter 6 00:08:33.597 =====Discovery Log Entry 0====== 00:08:33.597 trtype: tcp 00:08:33.597 adrfam: ipv4 00:08:33.597 subtype: current discovery subsystem 00:08:33.597 treq: not required 00:08:33.597 portid: 0 00:08:33.597 trsvcid: 4420 00:08:33.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:33.597 traddr: 10.0.0.2 00:08:33.597 eflags: explicit discovery connections, duplicate discovery information 00:08:33.597 sectype: none 00:08:33.597 =====Discovery Log Entry 1====== 00:08:33.597 trtype: tcp 00:08:33.597 adrfam: ipv4 00:08:33.597 subtype: nvme subsystem 00:08:33.597 treq: not required 00:08:33.597 portid: 0 00:08:33.597 trsvcid: 4420 00:08:33.597 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:33.597 traddr: 10.0.0.2 00:08:33.597 eflags: none 00:08:33.597 sectype: none 00:08:33.597 =====Discovery Log Entry 2====== 00:08:33.597 trtype: tcp 00:08:33.597 adrfam: ipv4 00:08:33.597 subtype: nvme subsystem 00:08:33.597 treq: not required 00:08:33.597 portid: 0 00:08:33.597 trsvcid: 
4420 00:08:33.597 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:33.597 traddr: 10.0.0.2 00:08:33.597 eflags: none 00:08:33.597 sectype: none 00:08:33.597 =====Discovery Log Entry 3====== 00:08:33.597 trtype: tcp 00:08:33.597 adrfam: ipv4 00:08:33.597 subtype: nvme subsystem 00:08:33.597 treq: not required 00:08:33.597 portid: 0 00:08:33.597 trsvcid: 4420 00:08:33.597 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:33.597 traddr: 10.0.0.2 00:08:33.597 eflags: none 00:08:33.597 sectype: none 00:08:33.597 =====Discovery Log Entry 4====== 00:08:33.597 trtype: tcp 00:08:33.597 adrfam: ipv4 00:08:33.597 subtype: nvme subsystem 00:08:33.597 treq: not required 00:08:33.597 portid: 0 00:08:33.597 trsvcid: 4420 00:08:33.597 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:33.597 traddr: 10.0.0.2 00:08:33.597 eflags: none 00:08:33.597 sectype: none 00:08:33.597 =====Discovery Log Entry 5====== 00:08:33.597 trtype: tcp 00:08:33.597 adrfam: ipv4 00:08:33.597 subtype: discovery subsystem referral 00:08:33.597 treq: not required 00:08:33.597 portid: 0 00:08:33.597 trsvcid: 4430 00:08:33.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:33.597 traddr: 10.0.0.2 00:08:33.597 eflags: none 00:08:33.597 sectype: none 00:08:33.597 Perform nvmf subsystem discovery via RPC 00:08:33.597 12:53:14 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:33.597 12:53:14 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:33.597 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.597 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.597 [2024-12-13 12:53:14.223377] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:33.597 [ 00:08:33.597 { 00:08:33.597 "allow_any_host": true, 00:08:33.597 "hosts": [], 00:08:33.597 "listen_addresses": [ 00:08:33.597 { 00:08:33.597 "adrfam": "IPv4", 00:08:33.597 "traddr": "10.0.0.2", 00:08:33.597 "transport": "TCP", 00:08:33.597 "trsvcid": "4420", 00:08:33.597 "trtype": "TCP" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:33.597 "subtype": "Discovery" 00:08:33.597 }, 00:08:33.597 { 00:08:33.597 "allow_any_host": true, 00:08:33.597 "hosts": [], 00:08:33.597 "listen_addresses": [ 00:08:33.597 { 00:08:33.597 "adrfam": "IPv4", 00:08:33.597 "traddr": "10.0.0.2", 00:08:33.597 "transport": "TCP", 00:08:33.597 "trsvcid": "4420", 00:08:33.597 "trtype": "TCP" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "max_cntlid": 65519, 00:08:33.597 "max_namespaces": 32, 00:08:33.597 "min_cntlid": 1, 00:08:33.597 "model_number": "SPDK bdev Controller", 00:08:33.597 "namespaces": [ 00:08:33.597 { 00:08:33.597 "bdev_name": "Null1", 00:08:33.597 "name": "Null1", 00:08:33.597 "nguid": "1B839054E9274F068D3FEF8E9D248288", 00:08:33.597 "nsid": 1, 00:08:33.597 "uuid": "1b839054-e927-4f06-8d3f-ef8e9d248288" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.597 "serial_number": "SPDK00000000000001", 00:08:33.597 "subtype": "NVMe" 00:08:33.597 }, 00:08:33.597 { 00:08:33.597 "allow_any_host": true, 00:08:33.597 "hosts": [], 00:08:33.597 "listen_addresses": [ 00:08:33.597 { 00:08:33.597 "adrfam": "IPv4", 00:08:33.597 "traddr": "10.0.0.2", 00:08:33.597 "transport": "TCP", 00:08:33.597 "trsvcid": "4420", 00:08:33.597 "trtype": "TCP" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "max_cntlid": 65519, 00:08:33.597 "max_namespaces": 32, 00:08:33.597 "min_cntlid": 
1, 00:08:33.597 "model_number": "SPDK bdev Controller", 00:08:33.597 "namespaces": [ 00:08:33.597 { 00:08:33.597 "bdev_name": "Null2", 00:08:33.597 "name": "Null2", 00:08:33.597 "nguid": "CC45196E36784ABABA192611B5FBA01A", 00:08:33.597 "nsid": 1, 00:08:33.597 "uuid": "cc45196e-3678-4aba-ba19-2611b5fba01a" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:33.597 "serial_number": "SPDK00000000000002", 00:08:33.597 "subtype": "NVMe" 00:08:33.597 }, 00:08:33.597 { 00:08:33.597 "allow_any_host": true, 00:08:33.597 "hosts": [], 00:08:33.597 "listen_addresses": [ 00:08:33.597 { 00:08:33.597 "adrfam": "IPv4", 00:08:33.597 "traddr": "10.0.0.2", 00:08:33.597 "transport": "TCP", 00:08:33.597 "trsvcid": "4420", 00:08:33.597 "trtype": "TCP" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "max_cntlid": 65519, 00:08:33.597 "max_namespaces": 32, 00:08:33.597 "min_cntlid": 1, 00:08:33.597 "model_number": "SPDK bdev Controller", 00:08:33.597 "namespaces": [ 00:08:33.597 { 00:08:33.597 "bdev_name": "Null3", 00:08:33.597 "name": "Null3", 00:08:33.597 "nguid": "AD94F6EEE36F46A5AAF26274508A2E0A", 00:08:33.597 "nsid": 1, 00:08:33.597 "uuid": "ad94f6ee-e36f-46a5-aaf2-6274508a2e0a" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:33.597 "serial_number": "SPDK00000000000003", 00:08:33.597 "subtype": "NVMe" 00:08:33.597 }, 00:08:33.597 { 00:08:33.597 "allow_any_host": true, 00:08:33.597 "hosts": [], 00:08:33.597 "listen_addresses": [ 00:08:33.597 { 00:08:33.597 "adrfam": "IPv4", 00:08:33.597 "traddr": "10.0.0.2", 00:08:33.597 "transport": "TCP", 00:08:33.597 "trsvcid": "4420", 00:08:33.597 "trtype": "TCP" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "max_cntlid": 65519, 00:08:33.597 "max_namespaces": 32, 00:08:33.597 "min_cntlid": 1, 00:08:33.597 "model_number": "SPDK bdev Controller", 00:08:33.597 "namespaces": [ 00:08:33.597 { 00:08:33.597 "bdev_name": "Null4", 00:08:33.597 "name": "Null4", 00:08:33.597 "nguid": "2135929C9B0242CE961A97B9D2F4437B", 00:08:33.597 "nsid": 1, 00:08:33.597 "uuid": "2135929c-9b02-42ce-961a-97b9d2f4437b" 00:08:33.597 } 00:08:33.597 ], 00:08:33.597 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:33.597 "serial_number": "SPDK00000000000004", 00:08:33.597 "subtype": "NVMe" 00:08:33.597 } 00:08:33.597 ] 00:08:33.597 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.597 12:53:14 -- target/discovery.sh@42 -- # seq 1 4 00:08:33.597 12:53:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.597 12:53:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.597 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.597 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.597 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.597 12:53:14 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:33.597 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.597 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.597 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.597 12:53:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.597 12:53:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:33.597 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.597 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.597 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.597 12:53:14 
-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:33.597 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.597 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.598 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.598 12:53:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.598 12:53:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:33.598 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.598 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.598 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.598 12:53:14 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:33.598 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.598 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.598 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.598 12:53:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.598 12:53:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:33.598 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.598 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.598 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.598 12:53:14 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:33.598 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.598 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.598 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.598 12:53:14 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:33.598 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.598 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.598 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.598 12:53:14 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:33.598 12:53:14 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:33.598 12:53:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.598 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.598 12:53:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.857 12:53:14 -- target/discovery.sh@49 -- # check_bdevs= 00:08:33.857 12:53:14 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:33.857 12:53:14 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:33.857 12:53:14 -- target/discovery.sh@57 -- # nvmftestfini 00:08:33.857 12:53:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:33.857 12:53:14 -- nvmf/common.sh@116 -- # sync 00:08:33.857 12:53:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:33.857 12:53:14 -- nvmf/common.sh@119 -- # set +e 00:08:33.857 12:53:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:33.857 12:53:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:33.857 rmmod nvme_tcp 00:08:33.857 rmmod nvme_fabrics 00:08:33.857 rmmod nvme_keyring 00:08:33.857 12:53:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:33.857 12:53:14 -- nvmf/common.sh@123 -- # set -e 00:08:33.857 12:53:14 -- nvmf/common.sh@124 -- # return 0 00:08:33.857 12:53:14 -- nvmf/common.sh@477 -- # '[' -n 73058 ']' 00:08:33.857 12:53:14 -- nvmf/common.sh@478 -- # killprocess 73058 00:08:33.857 12:53:14 -- common/autotest_common.sh@936 -- # '[' -z 73058 ']' 00:08:33.857 12:53:14 -- 
common/autotest_common.sh@940 -- # kill -0 73058 00:08:33.857 12:53:14 -- common/autotest_common.sh@941 -- # uname 00:08:33.857 12:53:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.857 12:53:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73058 00:08:33.857 killing process with pid 73058 00:08:33.857 12:53:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:33.857 12:53:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:33.857 12:53:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73058' 00:08:33.857 12:53:14 -- common/autotest_common.sh@955 -- # kill 73058 00:08:33.858 [2024-12-13 12:53:14.490891] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:33.858 12:53:14 -- common/autotest_common.sh@960 -- # wait 73058 00:08:34.117 12:53:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:34.117 12:53:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:34.117 12:53:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:34.117 12:53:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.117 12:53:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:34.117 12:53:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.117 12:53:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.117 12:53:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.117 12:53:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:34.117 ************************************ 00:08:34.117 END TEST nvmf_discovery 00:08:34.117 ************************************ 00:08:34.117 00:08:34.117 real 0m2.428s 00:08:34.117 user 0m6.727s 00:08:34.117 sys 0m0.620s 00:08:34.117 12:53:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.117 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:34.117 12:53:14 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:34.117 12:53:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:34.117 12:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.117 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:08:34.117 ************************************ 00:08:34.117 START TEST nvmf_referrals 00:08:34.117 ************************************ 00:08:34.117 12:53:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:34.117 * Looking for test storage... 
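The discovery test that just completed (END TEST nvmf_discovery) exercised a fixed provisioning pattern: one null bdev, one subsystem, one namespace and one TCP listener per cnode, plus a discovery listener and a port-4430 referral. That is why the nvme discover output above reports six records: the current discovery subsystem, the four cnode subsystems, and the referral. Assuming the rpc_cmd wrapper resolves to the stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket (both are assumptions here; the RPC names and arguments are taken from the trace), the equivalent standalone calls look roughly like this:

# Rough standalone equivalent of the rpc_cmd sequence traced in discovery.sh.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Same transport options as the trace.
$rpc nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    # Size and block-size values as used by the test (NULL_BDEV_SIZE, NULL_BLOCK_SIZE).
    $rpc bdev_null_create "Null$i" 102400 512
    # -a allows any host, -s sets the serial number (visible in the RPC dump above).
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# Discovery service listener plus one referral pointing at port 4430.
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430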
00:08:34.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.117 12:53:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.117 12:53:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.117 12:53:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.376 12:53:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.376 12:53:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.376 12:53:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.376 12:53:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.376 12:53:14 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.376 12:53:14 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.376 12:53:14 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.376 12:53:14 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.376 12:53:14 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.376 12:53:14 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.376 12:53:14 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.376 12:53:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.376 12:53:14 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.376 12:53:14 -- scripts/common.sh@344 -- # : 1 00:08:34.376 12:53:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.376 12:53:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.376 12:53:14 -- scripts/common.sh@364 -- # decimal 1 00:08:34.376 12:53:14 -- scripts/common.sh@352 -- # local d=1 00:08:34.376 12:53:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.376 12:53:14 -- scripts/common.sh@354 -- # echo 1 00:08:34.376 12:53:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.376 12:53:14 -- scripts/common.sh@365 -- # decimal 2 00:08:34.376 12:53:14 -- scripts/common.sh@352 -- # local d=2 00:08:34.376 12:53:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.376 12:53:14 -- scripts/common.sh@354 -- # echo 2 00:08:34.376 12:53:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.376 12:53:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.376 12:53:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.376 12:53:14 -- scripts/common.sh@367 -- # return 0 00:08:34.376 12:53:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.376 12:53:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 12:53:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 12:53:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 
12:53:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 12:53:14 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.376 12:53:14 -- nvmf/common.sh@7 -- # uname -s 00:08:34.376 12:53:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.376 12:53:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.376 12:53:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.376 12:53:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.376 12:53:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.376 12:53:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.376 12:53:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.376 12:53:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.376 12:53:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.376 12:53:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.376 12:53:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:08:34.376 12:53:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:08:34.376 12:53:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.376 12:53:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.376 12:53:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.376 12:53:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.376 12:53:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.377 12:53:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.377 12:53:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.377 12:53:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.377 12:53:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.377 12:53:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.377 12:53:15 -- paths/export.sh@5 -- # export PATH 00:08:34.377 12:53:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.377 12:53:15 -- nvmf/common.sh@46 -- # : 0 00:08:34.377 12:53:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:34.377 12:53:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:34.377 12:53:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:34.377 12:53:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.377 12:53:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.377 12:53:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:34.377 12:53:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:34.377 12:53:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:34.377 12:53:15 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:34.377 12:53:15 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:34.377 12:53:15 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:34.377 12:53:15 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:34.377 12:53:15 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:34.377 12:53:15 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:34.377 12:53:15 -- target/referrals.sh@37 -- # nvmftestinit 00:08:34.377 12:53:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:34.377 12:53:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.377 12:53:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:34.377 12:53:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:34.377 12:53:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:34.377 12:53:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.377 12:53:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.377 12:53:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.377 12:53:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:34.377 12:53:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:34.377 12:53:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:34.377 12:53:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:34.377 12:53:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:34.377 12:53:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:34.377 12:53:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.377 12:53:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:34.377 12:53:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:34.377 12:53:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:34.377 12:53:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.377 12:53:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.377 12:53:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.377 12:53:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.377 12:53:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.377 12:53:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.377 12:53:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.377 12:53:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.377 12:53:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:34.377 12:53:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:34.377 Cannot find device "nvmf_tgt_br" 00:08:34.377 12:53:15 -- nvmf/common.sh@154 -- # true 00:08:34.377 12:53:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.377 Cannot find device "nvmf_tgt_br2" 00:08:34.377 12:53:15 -- nvmf/common.sh@155 -- # true 00:08:34.377 12:53:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:34.377 12:53:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:34.377 Cannot find device "nvmf_tgt_br" 00:08:34.377 12:53:15 -- nvmf/common.sh@157 -- # true 00:08:34.377 12:53:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:34.377 Cannot find device "nvmf_tgt_br2" 00:08:34.377 12:53:15 -- nvmf/common.sh@158 -- # true 00:08:34.377 12:53:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:34.377 12:53:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:34.377 12:53:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.377 12:53:15 -- nvmf/common.sh@161 -- # true 00:08:34.377 12:53:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.636 12:53:15 -- nvmf/common.sh@162 -- # true 00:08:34.636 12:53:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.636 12:53:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.636 12:53:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.636 12:53:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.636 12:53:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.636 12:53:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.636 12:53:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.636 12:53:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:34.636 12:53:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:34.636 12:53:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:34.636 12:53:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:34.636 12:53:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:34.636 12:53:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:34.636 12:53:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.636 12:53:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.636 12:53:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.636 12:53:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:34.636 12:53:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:34.636 12:53:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.636 12:53:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.636 12:53:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.636 12:53:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.636 12:53:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.636 12:53:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:34.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:34.636 00:08:34.636 --- 10.0.0.2 ping statistics --- 00:08:34.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.636 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:34.636 12:53:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:34.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:34.636 00:08:34.636 --- 10.0.0.3 ping statistics --- 00:08:34.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.636 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:34.636 12:53:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:34.636 00:08:34.636 --- 10.0.0.1 ping statistics --- 00:08:34.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.636 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:34.636 12:53:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.636 12:53:15 -- nvmf/common.sh@421 -- # return 0 00:08:34.636 12:53:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:34.636 12:53:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.636 12:53:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:34.636 12:53:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:34.636 12:53:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.636 12:53:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:34.636 12:53:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:34.636 12:53:15 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:34.636 12:53:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:34.636 12:53:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:34.636 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:08:34.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:34.636 12:53:15 -- nvmf/common.sh@469 -- # nvmfpid=73297 00:08:34.636 12:53:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.636 12:53:15 -- nvmf/common.sh@470 -- # waitforlisten 73297 00:08:34.636 12:53:15 -- common/autotest_common.sh@829 -- # '[' -z 73297 ']' 00:08:34.636 12:53:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.636 12:53:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.636 12:53:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.636 12:53:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.636 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:08:34.636 [2024-12-13 12:53:15.398391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:34.636 [2024-12-13 12:53:15.398678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.895 [2024-12-13 12:53:15.533429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.895 [2024-12-13 12:53:15.597147] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:34.895 [2024-12-13 12:53:15.597620] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.895 [2024-12-13 12:53:15.597641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.895 [2024-12-13 12:53:15.597660] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
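nvmfappstart above launches the target inside the test namespace with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF) and a four-core mask (-m 0xF), then waitforlisten blocks until the RPC socket answers before any rpc_cmd is issued. The snippet below is only an approximation of that start-and-wait step; the polling loop is a stand-in for the real waitforlisten helper in autotest_common.sh, and rpc.py on the default socket is assumed.

# Approximation of nvmfappstart/waitforlisten as seen in the trace above.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll until the target answers on its default RPC socket, bailing out if it died.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"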
00:08:34.895 [2024-12-13 12:53:15.597835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.895 [2024-12-13 12:53:15.597992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.895 [2024-12-13 12:53:15.598107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.895 [2024-12-13 12:53:15.598114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.831 12:53:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.831 12:53:16 -- common/autotest_common.sh@862 -- # return 0 00:08:35.831 12:53:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:35.831 12:53:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.831 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.831 12:53:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.831 12:53:16 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.831 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.831 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.831 [2024-12-13 12:53:16.498479] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.831 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.831 12:53:16 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:35.831 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.831 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.831 [2024-12-13 12:53:16.521916] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:35.831 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.831 12:53:16 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:35.831 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.831 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.831 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.831 12:53:16 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:35.831 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.831 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.831 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.831 12:53:16 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:35.831 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.831 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.831 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.831 12:53:16 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.831 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.831 12:53:16 -- target/referrals.sh@48 -- # jq length 00:08:35.831 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.831 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.831 12:53:16 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:36.090 12:53:16 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:36.090 12:53:16 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.090 12:53:16 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.090 12:53:16 -- target/referrals.sh@21 -- # sort 00:08:36.090 12:53:16 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.090 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:36.090 12:53:16 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.090 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.090 12:53:16 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.090 12:53:16 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.090 12:53:16 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:36.090 12:53:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.090 12:53:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.090 12:53:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.090 12:53:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.090 12:53:16 -- target/referrals.sh@26 -- # sort 00:08:36.090 12:53:16 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.090 12:53:16 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.090 12:53:16 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.090 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.090 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:36.090 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.090 12:53:16 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.090 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.090 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:36.090 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.090 12:53:16 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.090 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.090 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:36.090 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.090 12:53:16 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.090 12:53:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.090 12:53:16 -- target/referrals.sh@56 -- # jq length 00:08:36.090 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:36.090 12:53:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.090 12:53:16 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:36.090 12:53:16 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:36.090 12:53:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.090 12:53:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.090 12:53:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.090 12:53:16 -- target/referrals.sh@26 -- # sort 00:08:36.090 12:53:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.348 12:53:17 -- target/referrals.sh@26 -- # echo 00:08:36.348 12:53:17 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:36.348 12:53:17 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:36.348 12:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.348 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.348 12:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.348 12:53:17 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:36.348 12:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.348 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.348 12:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.348 12:53:17 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:36.348 12:53:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.348 12:53:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.348 12:53:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.348 12:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.348 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.348 12:53:17 -- target/referrals.sh@21 -- # sort 00:08:36.348 12:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.348 12:53:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:36.348 12:53:17 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.348 12:53:17 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:36.348 12:53:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.348 12:53:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.348 12:53:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.349 12:53:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.349 12:53:17 -- target/referrals.sh@26 -- # sort 00:08:36.607 12:53:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:36.607 12:53:17 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.607 12:53:17 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:36.607 12:53:17 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.607 12:53:17 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:36.607 12:53:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:36.607 12:53:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.607 12:53:17 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:36.607 12:53:17 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:36.607 12:53:17 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:36.607 12:53:17 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:36.607 12:53:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
--hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.607 12:53:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:36.865 12:53:17 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:36.865 12:53:17 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:36.865 12:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.865 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.865 12:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.865 12:53:17 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:36.865 12:53:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.865 12:53:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.865 12:53:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.865 12:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.865 12:53:17 -- target/referrals.sh@21 -- # sort 00:08:36.865 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.865 12:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.865 12:53:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:36.865 12:53:17 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.865 12:53:17 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:36.865 12:53:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.865 12:53:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.865 12:53:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.865 12:53:17 -- target/referrals.sh@26 -- # sort 00:08:36.865 12:53:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.865 12:53:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:36.865 12:53:17 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.865 12:53:17 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:36.865 12:53:17 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:36.865 12:53:17 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.865 12:53:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.865 12:53:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.124 12:53:17 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:37.124 12:53:17 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:37.124 12:53:17 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.124 12:53:17 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.124 12:53:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.124 12:53:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:08:37.124 12:53:17 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:37.124 12:53:17 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:37.124 12:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.124 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:08:37.124 12:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.124 12:53:17 -- target/referrals.sh@82 -- # jq length 00:08:37.124 12:53:17 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.124 12:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.124 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:08:37.124 12:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.124 12:53:17 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:37.124 12:53:17 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:37.124 12:53:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.124 12:53:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.124 12:53:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.124 12:53:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.124 12:53:17 -- target/referrals.sh@26 -- # sort 00:08:37.382 12:53:18 -- target/referrals.sh@26 -- # echo 00:08:37.382 12:53:18 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:37.382 12:53:18 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:37.382 12:53:18 -- target/referrals.sh@86 -- # nvmftestfini 00:08:37.382 12:53:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:37.382 12:53:18 -- nvmf/common.sh@116 -- # sync 00:08:37.382 12:53:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:37.382 12:53:18 -- nvmf/common.sh@119 -- # set +e 00:08:37.383 12:53:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:37.383 12:53:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:37.383 rmmod nvme_tcp 00:08:37.383 rmmod nvme_fabrics 00:08:37.383 rmmod nvme_keyring 00:08:37.383 12:53:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:37.383 12:53:18 -- nvmf/common.sh@123 -- # set -e 00:08:37.383 12:53:18 -- nvmf/common.sh@124 -- # return 0 00:08:37.383 12:53:18 -- nvmf/common.sh@477 -- # '[' -n 73297 ']' 00:08:37.383 12:53:18 -- nvmf/common.sh@478 -- # killprocess 73297 00:08:37.383 12:53:18 -- common/autotest_common.sh@936 -- # '[' -z 73297 ']' 00:08:37.383 12:53:18 -- common/autotest_common.sh@940 -- # kill -0 73297 00:08:37.383 12:53:18 -- common/autotest_common.sh@941 -- # uname 00:08:37.383 12:53:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:37.383 12:53:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73297 00:08:37.383 killing process with pid 73297 00:08:37.383 12:53:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:37.383 12:53:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:37.383 12:53:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73297' 00:08:37.383 12:53:18 -- common/autotest_common.sh@955 -- # kill 73297 00:08:37.383 12:53:18 -- common/autotest_common.sh@960 -- # wait 73297 00:08:37.641 12:53:18 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:37.641 12:53:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:37.642 12:53:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:37.642 12:53:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.642 12:53:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:37.642 12:53:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.642 12:53:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.642 12:53:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.642 12:53:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:37.642 00:08:37.642 real 0m3.601s 00:08:37.642 user 0m12.191s 00:08:37.642 sys 0m0.857s 00:08:37.642 12:53:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.642 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:08:37.642 ************************************ 00:08:37.642 END TEST nvmf_referrals 00:08:37.642 ************************************ 00:08:37.901 12:53:18 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.901 12:53:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.901 12:53:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.901 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:08:37.901 ************************************ 00:08:37.901 START TEST nvmf_connect_disconnect 00:08:37.901 ************************************ 00:08:37.901 12:53:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.901 * Looking for test storage... 00:08:37.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:37.901 12:53:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:37.901 12:53:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:37.901 12:53:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:37.901 12:53:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:37.901 12:53:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:37.901 12:53:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:37.901 12:53:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:37.901 12:53:18 -- scripts/common.sh@335 -- # IFS=.-: 00:08:37.901 12:53:18 -- scripts/common.sh@335 -- # read -ra ver1 00:08:37.901 12:53:18 -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.901 12:53:18 -- scripts/common.sh@336 -- # read -ra ver2 00:08:37.901 12:53:18 -- scripts/common.sh@337 -- # local 'op=<' 00:08:37.901 12:53:18 -- scripts/common.sh@339 -- # ver1_l=2 00:08:37.901 12:53:18 -- scripts/common.sh@340 -- # ver2_l=1 00:08:37.901 12:53:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:37.901 12:53:18 -- scripts/common.sh@343 -- # case "$op" in 00:08:37.901 12:53:18 -- scripts/common.sh@344 -- # : 1 00:08:37.901 12:53:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:37.901 12:53:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.901 12:53:18 -- scripts/common.sh@364 -- # decimal 1 00:08:37.901 12:53:18 -- scripts/common.sh@352 -- # local d=1 00:08:37.901 12:53:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.901 12:53:18 -- scripts/common.sh@354 -- # echo 1 00:08:37.901 12:53:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:37.901 12:53:18 -- scripts/common.sh@365 -- # decimal 2 00:08:37.901 12:53:18 -- scripts/common.sh@352 -- # local d=2 00:08:37.901 12:53:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.901 12:53:18 -- scripts/common.sh@354 -- # echo 2 00:08:37.901 12:53:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:37.901 12:53:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:37.901 12:53:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:37.901 12:53:18 -- scripts/common.sh@367 -- # return 0 00:08:37.901 12:53:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.901 12:53:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:37.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.901 --rc genhtml_branch_coverage=1 00:08:37.901 --rc genhtml_function_coverage=1 00:08:37.901 --rc genhtml_legend=1 00:08:37.901 --rc geninfo_all_blocks=1 00:08:37.901 --rc geninfo_unexecuted_blocks=1 00:08:37.901 00:08:37.901 ' 00:08:37.901 12:53:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:37.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.901 --rc genhtml_branch_coverage=1 00:08:37.901 --rc genhtml_function_coverage=1 00:08:37.901 --rc genhtml_legend=1 00:08:37.901 --rc geninfo_all_blocks=1 00:08:37.901 --rc geninfo_unexecuted_blocks=1 00:08:37.901 00:08:37.901 ' 00:08:37.901 12:53:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:37.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.901 --rc genhtml_branch_coverage=1 00:08:37.901 --rc genhtml_function_coverage=1 00:08:37.901 --rc genhtml_legend=1 00:08:37.901 --rc geninfo_all_blocks=1 00:08:37.901 --rc geninfo_unexecuted_blocks=1 00:08:37.901 00:08:37.901 ' 00:08:37.901 12:53:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:37.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.901 --rc genhtml_branch_coverage=1 00:08:37.901 --rc genhtml_function_coverage=1 00:08:37.901 --rc genhtml_legend=1 00:08:37.901 --rc geninfo_all_blocks=1 00:08:37.901 --rc geninfo_unexecuted_blocks=1 00:08:37.901 00:08:37.901 ' 00:08:37.901 12:53:18 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.901 12:53:18 -- nvmf/common.sh@7 -- # uname -s 00:08:37.901 12:53:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.901 12:53:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.901 12:53:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.901 12:53:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.901 12:53:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.901 12:53:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.901 12:53:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.901 12:53:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.901 12:53:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.901 12:53:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.901 12:53:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
00:08:37.901 12:53:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:08:37.901 12:53:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.901 12:53:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.901 12:53:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.901 12:53:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.901 12:53:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.901 12:53:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.901 12:53:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.901 12:53:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.901 12:53:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.902 12:53:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.902 12:53:18 -- paths/export.sh@5 -- # export PATH 00:08:37.902 12:53:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.902 12:53:18 -- nvmf/common.sh@46 -- # : 0 00:08:37.902 12:53:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.902 12:53:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.902 12:53:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.902 12:53:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.902 12:53:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.902 12:53:18 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:37.902 12:53:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.902 12:53:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.902 12:53:18 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.902 12:53:18 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.902 12:53:18 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:37.902 12:53:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:37.902 12:53:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.902 12:53:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:37.902 12:53:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:37.902 12:53:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:37.902 12:53:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.902 12:53:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.902 12:53:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.902 12:53:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:37.902 12:53:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:37.902 12:53:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:37.902 12:53:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:37.902 12:53:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:37.902 12:53:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:37.902 12:53:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.902 12:53:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.902 12:53:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:37.902 12:53:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:37.902 12:53:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:37.902 12:53:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:37.902 12:53:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:37.902 12:53:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.902 12:53:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:37.902 12:53:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:37.902 12:53:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:37.902 12:53:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:37.902 12:53:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:37.902 12:53:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:37.902 Cannot find device "nvmf_tgt_br" 00:08:37.902 12:53:18 -- nvmf/common.sh@154 -- # true 00:08:37.902 12:53:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.160 Cannot find device "nvmf_tgt_br2" 00:08:38.160 12:53:18 -- nvmf/common.sh@155 -- # true 00:08:38.160 12:53:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:38.160 12:53:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:38.160 Cannot find device "nvmf_tgt_br" 00:08:38.160 12:53:18 -- nvmf/common.sh@157 -- # true 00:08:38.160 12:53:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:38.160 Cannot find device "nvmf_tgt_br2" 00:08:38.160 12:53:18 -- nvmf/common.sh@158 -- # true 00:08:38.160 12:53:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:38.160 12:53:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:38.160 12:53:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:38.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.160 12:53:18 -- nvmf/common.sh@161 -- # true 00:08:38.160 12:53:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.160 12:53:18 -- nvmf/common.sh@162 -- # true 00:08:38.160 12:53:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.160 12:53:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.160 12:53:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.160 12:53:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.160 12:53:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.160 12:53:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.160 12:53:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.160 12:53:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.160 12:53:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.160 12:53:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:38.160 12:53:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:38.160 12:53:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:38.160 12:53:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:38.160 12:53:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.160 12:53:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.160 12:53:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.160 12:53:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:38.160 12:53:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:38.160 12:53:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.160 12:53:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.419 12:53:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.419 12:53:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.419 12:53:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.419 12:53:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:38.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:38.420 00:08:38.420 --- 10.0.0.2 ping statistics --- 00:08:38.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.420 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:38.420 12:53:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:38.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:08:38.420 00:08:38.420 --- 10.0.0.3 ping statistics --- 00:08:38.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.420 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:38.420 12:53:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:38.420 00:08:38.420 --- 10.0.0.1 ping statistics --- 00:08:38.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.420 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:38.420 12:53:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.420 12:53:18 -- nvmf/common.sh@421 -- # return 0 00:08:38.420 12:53:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.420 12:53:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.420 12:53:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:38.420 12:53:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:38.420 12:53:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.420 12:53:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:38.420 12:53:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:38.420 12:53:18 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:38.420 12:53:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.420 12:53:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.420 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:08:38.420 12:53:18 -- nvmf/common.sh@469 -- # nvmfpid=73612 00:08:38.420 12:53:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.420 12:53:18 -- nvmf/common.sh@470 -- # waitforlisten 73612 00:08:38.420 12:53:18 -- common/autotest_common.sh@829 -- # '[' -z 73612 ']' 00:08:38.420 12:53:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.420 12:53:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.420 12:53:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.420 12:53:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.420 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:08:38.420 [2024-12-13 12:53:19.051096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.420 [2024-12-13 12:53:19.051185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.420 [2024-12-13 12:53:19.194447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.679 [2024-12-13 12:53:19.266882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.679 [2024-12-13 12:53:19.267077] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.679 [2024-12-13 12:53:19.267091] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.679 [2024-12-13 12:53:19.267100] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
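The ping checks above verify the virtual topology the harness builds before launching nvmf_tgt: the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the two sides are joined through the nvmf_br bridge with TCP port 4420 opened in iptables. A minimal sketch of that setup, using only the interface names and addresses that appear in the log (the authoritative steps are nvmf_veth_init in test/nvmf/common.sh; ordering and the "link set ... up" steps are condensed here for illustration):

  # create the target namespace and the two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the initiator end and the target end
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # bridge the host-side peers and allow NVMe/TCP traffic in
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT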
00:08:38.679 [2024-12-13 12:53:19.267640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.679 [2024-12-13 12:53:19.267814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.679 [2024-12-13 12:53:19.268167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.679 [2024-12-13 12:53:19.268223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.246 12:53:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.246 12:53:20 -- common/autotest_common.sh@862 -- # return 0 00:08:39.246 12:53:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.246 12:53:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.246 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 12:53:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:39.507 12:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.507 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 [2024-12-13 12:53:20.056244] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.507 12:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:39.507 12:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.507 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 12:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:39.507 12:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.507 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 12:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.507 12:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.507 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 12:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.507 12:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.507 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 [2024-12-13 12:53:20.125524] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.507 12:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:39.507 12:53:20 -- target/connect_disconnect.sh@34 -- # set +x 00:08:42.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:50.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.121 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:40.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.462 12:57:04 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
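The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages above is the body of the connect_disconnect test. The test configured num_iterations=100 and NVME_CONNECT='nvme connect -i 8' at startup, so each iteration attaches the initiator to the cnode1 subsystem listening on 10.0.0.2:4420 and then tears the association down. A hedged sketch of that loop, keeping only parameters visible in the log (the real script additionally passes the --hostnqn/--hostid pair defined in nvmf/common.sh and performs per-iteration checks that are omitted here):

  for ((i = 0; i < 100; i++)); do
      # connect with 8 I/O queue pairs, per NVME_CONNECT='nvme connect -i 8'
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      # disconnect again; each pass prints "... disconnected 1 controller(s)"
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done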
00:12:23.462 12:57:04 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:23.462 12:57:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:23.462 12:57:04 -- nvmf/common.sh@116 -- # sync 00:12:23.462 12:57:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:23.462 12:57:04 -- nvmf/common.sh@119 -- # set +e 00:12:23.462 12:57:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:23.462 12:57:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:23.462 rmmod nvme_tcp 00:12:23.462 rmmod nvme_fabrics 00:12:23.462 rmmod nvme_keyring 00:12:23.462 12:57:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:23.462 12:57:04 -- nvmf/common.sh@123 -- # set -e 00:12:23.462 12:57:04 -- nvmf/common.sh@124 -- # return 0 00:12:23.462 12:57:04 -- nvmf/common.sh@477 -- # '[' -n 73612 ']' 00:12:23.462 12:57:04 -- nvmf/common.sh@478 -- # killprocess 73612 00:12:23.462 12:57:04 -- common/autotest_common.sh@936 -- # '[' -z 73612 ']' 00:12:23.462 12:57:04 -- common/autotest_common.sh@940 -- # kill -0 73612 00:12:23.462 12:57:04 -- common/autotest_common.sh@941 -- # uname 00:12:23.462 12:57:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:23.462 12:57:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73612 00:12:23.462 12:57:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:23.462 12:57:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:23.462 killing process with pid 73612 00:12:23.462 12:57:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73612' 00:12:23.462 12:57:04 -- common/autotest_common.sh@955 -- # kill 73612 00:12:23.462 12:57:04 -- common/autotest_common.sh@960 -- # wait 73612 00:12:23.721 12:57:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:23.721 12:57:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:23.721 12:57:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:23.721 12:57:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.721 12:57:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:23.721 12:57:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.721 12:57:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.721 12:57:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.721 12:57:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:23.721 00:12:23.721 real 3m45.958s 00:12:23.721 user 14m35.797s 00:12:23.721 sys 0m26.967s 00:12:23.721 12:57:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:23.721 ************************************ 00:12:23.721 12:57:04 -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 END TEST nvmf_connect_disconnect 00:12:23.721 ************************************ 00:12:23.721 12:57:04 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:23.721 12:57:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:23.721 12:57:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.721 12:57:04 -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 ************************************ 00:12:23.721 START TEST nvmf_multitarget 00:12:23.721 ************************************ 00:12:23.721 12:57:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:23.981 * Looking for test storage... 
00:12:23.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:23.981 12:57:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:23.981 12:57:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:23.981 12:57:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:23.981 12:57:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:23.981 12:57:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:23.981 12:57:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:23.981 12:57:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:23.981 12:57:04 -- scripts/common.sh@335 -- # IFS=.-: 00:12:23.981 12:57:04 -- scripts/common.sh@335 -- # read -ra ver1 00:12:23.981 12:57:04 -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.981 12:57:04 -- scripts/common.sh@336 -- # read -ra ver2 00:12:23.981 12:57:04 -- scripts/common.sh@337 -- # local 'op=<' 00:12:23.981 12:57:04 -- scripts/common.sh@339 -- # ver1_l=2 00:12:23.981 12:57:04 -- scripts/common.sh@340 -- # ver2_l=1 00:12:23.981 12:57:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:23.981 12:57:04 -- scripts/common.sh@343 -- # case "$op" in 00:12:23.981 12:57:04 -- scripts/common.sh@344 -- # : 1 00:12:23.981 12:57:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:23.981 12:57:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:23.981 12:57:04 -- scripts/common.sh@364 -- # decimal 1 00:12:23.981 12:57:04 -- scripts/common.sh@352 -- # local d=1 00:12:23.981 12:57:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.981 12:57:04 -- scripts/common.sh@354 -- # echo 1 00:12:23.981 12:57:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:23.981 12:57:04 -- scripts/common.sh@365 -- # decimal 2 00:12:23.981 12:57:04 -- scripts/common.sh@352 -- # local d=2 00:12:23.981 12:57:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.981 12:57:04 -- scripts/common.sh@354 -- # echo 2 00:12:23.981 12:57:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:23.981 12:57:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:23.981 12:57:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:23.981 12:57:04 -- scripts/common.sh@367 -- # return 0 00:12:23.981 12:57:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.981 12:57:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:23.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.981 --rc genhtml_branch_coverage=1 00:12:23.981 --rc genhtml_function_coverage=1 00:12:23.981 --rc genhtml_legend=1 00:12:23.981 --rc geninfo_all_blocks=1 00:12:23.981 --rc geninfo_unexecuted_blocks=1 00:12:23.981 00:12:23.981 ' 00:12:23.981 12:57:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:23.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.981 --rc genhtml_branch_coverage=1 00:12:23.981 --rc genhtml_function_coverage=1 00:12:23.981 --rc genhtml_legend=1 00:12:23.981 --rc geninfo_all_blocks=1 00:12:23.981 --rc geninfo_unexecuted_blocks=1 00:12:23.981 00:12:23.981 ' 00:12:23.981 12:57:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:23.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.981 --rc genhtml_branch_coverage=1 00:12:23.981 --rc genhtml_function_coverage=1 00:12:23.981 --rc genhtml_legend=1 00:12:23.981 --rc geninfo_all_blocks=1 00:12:23.981 --rc geninfo_unexecuted_blocks=1 00:12:23.981 00:12:23.981 ' 00:12:23.981 
12:57:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:23.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.981 --rc genhtml_branch_coverage=1 00:12:23.981 --rc genhtml_function_coverage=1 00:12:23.981 --rc genhtml_legend=1 00:12:23.981 --rc geninfo_all_blocks=1 00:12:23.981 --rc geninfo_unexecuted_blocks=1 00:12:23.981 00:12:23.981 ' 00:12:23.981 12:57:04 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:23.981 12:57:04 -- nvmf/common.sh@7 -- # uname -s 00:12:23.981 12:57:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.981 12:57:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.981 12:57:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.981 12:57:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.981 12:57:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.981 12:57:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.981 12:57:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.981 12:57:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.981 12:57:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.981 12:57:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.981 12:57:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:23.981 12:57:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:23.981 12:57:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.981 12:57:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.981 12:57:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:23.981 12:57:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.981 12:57:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.981 12:57:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.981 12:57:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.981 12:57:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.981 12:57:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.981 12:57:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.981 12:57:04 -- paths/export.sh@5 -- # export PATH 00:12:23.981 12:57:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.981 12:57:04 -- nvmf/common.sh@46 -- # : 0 00:12:23.981 12:57:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:23.981 12:57:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:23.981 12:57:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:23.981 12:57:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.981 12:57:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.981 12:57:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:23.982 12:57:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:23.982 12:57:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:23.982 12:57:04 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:23.982 12:57:04 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:23.982 12:57:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:23.982 12:57:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.982 12:57:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:23.982 12:57:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:23.982 12:57:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:23.982 12:57:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.982 12:57:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.982 12:57:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.982 12:57:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:23.982 12:57:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:23.982 12:57:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:23.982 12:57:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:23.982 12:57:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:23.982 12:57:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:23.982 12:57:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.982 12:57:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.982 12:57:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:23.982 12:57:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:23.982 12:57:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:23.982 12:57:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:23.982 12:57:04 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:23.982 12:57:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.982 12:57:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:23.982 12:57:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:23.982 12:57:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:23.982 12:57:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:23.982 12:57:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:23.982 12:57:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:23.982 Cannot find device "nvmf_tgt_br" 00:12:23.982 12:57:04 -- nvmf/common.sh@154 -- # true 00:12:23.982 12:57:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:23.982 Cannot find device "nvmf_tgt_br2" 00:12:23.982 12:57:04 -- nvmf/common.sh@155 -- # true 00:12:23.982 12:57:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:23.982 12:57:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:23.982 Cannot find device "nvmf_tgt_br" 00:12:23.982 12:57:04 -- nvmf/common.sh@157 -- # true 00:12:23.982 12:57:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:23.982 Cannot find device "nvmf_tgt_br2" 00:12:23.982 12:57:04 -- nvmf/common.sh@158 -- # true 00:12:23.982 12:57:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:24.240 12:57:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:24.240 12:57:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.240 12:57:04 -- nvmf/common.sh@161 -- # true 00:12:24.240 12:57:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.240 12:57:04 -- nvmf/common.sh@162 -- # true 00:12:24.240 12:57:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.240 12:57:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.240 12:57:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.240 12:57:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.240 12:57:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.240 12:57:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.240 12:57:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.240 12:57:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:24.240 12:57:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:24.240 12:57:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:24.240 12:57:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:24.240 12:57:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:24.240 12:57:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:24.240 12:57:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:24.240 12:57:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:24.240 12:57:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:24.240 12:57:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:24.240 12:57:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:24.240 12:57:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:24.240 12:57:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:24.240 12:57:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:24.240 12:57:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:24.240 12:57:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:24.240 12:57:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:24.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:12:24.240 00:12:24.240 --- 10.0.0.2 ping statistics --- 00:12:24.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.240 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:24.240 12:57:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:24.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:24.240 00:12:24.240 --- 10.0.0.3 ping statistics --- 00:12:24.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.240 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:24.240 12:57:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:12:24.240 00:12:24.240 --- 10.0.0.1 ping statistics --- 00:12:24.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.240 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:24.240 12:57:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.240 12:57:05 -- nvmf/common.sh@421 -- # return 0 00:12:24.240 12:57:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:24.240 12:57:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.240 12:57:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:24.499 12:57:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:24.499 12:57:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.499 12:57:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:24.499 12:57:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:24.499 12:57:05 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:24.499 12:57:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:24.499 12:57:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.499 12:57:05 -- common/autotest_common.sh@10 -- # set +x 00:12:24.499 12:57:05 -- nvmf/common.sh@469 -- # nvmfpid=77408 00:12:24.499 12:57:05 -- nvmf/common.sh@470 -- # waitforlisten 77408 00:12:24.499 12:57:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.499 12:57:05 -- common/autotest_common.sh@829 -- # '[' -z 77408 ']' 00:12:24.499 12:57:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.499 12:57:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
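For reference, the veth/namespace topology that nvmf_veth_init builds in the trace above can be recreated on its own with a sketch like the following; the namespace, interface, and address names are exactly the ones printed in the log, while ordering is simplified and cleanup/error handling are omitted.

    #!/usr/bin/env bash
    # Minimal sketch of the nvmf test topology (names taken from the log above).
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # veth pairs: the *_if ends carry traffic, the *_br ends get bridged together.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side interfaces move into the namespace; the initiator stays in the root ns.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # Bridge the root-namespace ends and open the NVMe/TCP listener port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # same sanity checks as in the log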
00:12:24.499 12:57:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.499 12:57:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.499 12:57:05 -- common/autotest_common.sh@10 -- # set +x 00:12:24.499 [2024-12-13 12:57:05.093905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:24.499 [2024-12-13 12:57:05.094114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.499 [2024-12-13 12:57:05.232172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.757 [2024-12-13 12:57:05.290593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:24.757 [2024-12-13 12:57:05.291039] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.757 [2024-12-13 12:57:05.291154] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.757 [2024-12-13 12:57:05.291259] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.757 [2024-12-13 12:57:05.291466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.757 [2024-12-13 12:57:05.291572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.757 [2024-12-13 12:57:05.291694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.757 [2024-12-13 12:57:05.291780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.691 12:57:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.691 12:57:06 -- common/autotest_common.sh@862 -- # return 0 00:12:25.691 12:57:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:25.691 12:57:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:25.691 12:57:06 -- common/autotest_common.sh@10 -- # set +x 00:12:25.691 12:57:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.691 12:57:06 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.691 12:57:06 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.691 12:57:06 -- target/multitarget.sh@21 -- # jq length 00:12:25.691 12:57:06 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:25.691 12:57:06 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:25.691 "nvmf_tgt_1" 00:12:25.691 12:57:06 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:25.950 "nvmf_tgt_2" 00:12:25.950 12:57:06 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.950 12:57:06 -- target/multitarget.sh@28 -- # jq length 00:12:25.950 12:57:06 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:25.950 12:57:06 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:26.209 true 00:12:26.209 12:57:06 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:26.209 true 00:12:26.209 12:57:06 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.209 12:57:06 -- target/multitarget.sh@35 -- # jq length 00:12:26.468 12:57:07 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:26.468 12:57:07 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:26.468 12:57:07 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:26.468 12:57:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:26.468 12:57:07 -- nvmf/common.sh@116 -- # sync 00:12:26.468 12:57:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:26.468 12:57:07 -- nvmf/common.sh@119 -- # set +e 00:12:26.468 12:57:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:26.468 12:57:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:26.468 rmmod nvme_tcp 00:12:26.468 rmmod nvme_fabrics 00:12:26.468 rmmod nvme_keyring 00:12:26.468 12:57:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:26.468 12:57:07 -- nvmf/common.sh@123 -- # set -e 00:12:26.468 12:57:07 -- nvmf/common.sh@124 -- # return 0 00:12:26.468 12:57:07 -- nvmf/common.sh@477 -- # '[' -n 77408 ']' 00:12:26.468 12:57:07 -- nvmf/common.sh@478 -- # killprocess 77408 00:12:26.468 12:57:07 -- common/autotest_common.sh@936 -- # '[' -z 77408 ']' 00:12:26.468 12:57:07 -- common/autotest_common.sh@940 -- # kill -0 77408 00:12:26.468 12:57:07 -- common/autotest_common.sh@941 -- # uname 00:12:26.468 12:57:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.468 12:57:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77408 00:12:26.468 killing process with pid 77408 00:12:26.468 12:57:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.468 12:57:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.468 12:57:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77408' 00:12:26.468 12:57:07 -- common/autotest_common.sh@955 -- # kill 77408 00:12:26.468 12:57:07 -- common/autotest_common.sh@960 -- # wait 77408 00:12:26.727 12:57:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:26.727 12:57:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:26.727 12:57:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:26.727 12:57:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.727 12:57:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:26.727 12:57:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.727 12:57:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.727 12:57:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.727 12:57:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:26.727 ************************************ 00:12:26.727 END TEST nvmf_multitarget 00:12:26.727 ************************************ 00:12:26.727 00:12:26.727 real 0m2.968s 00:12:26.727 user 0m9.668s 00:12:26.727 sys 0m0.701s 00:12:26.727 12:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.727 12:57:07 -- common/autotest_common.sh@10 -- # set +x 00:12:26.727 12:57:07 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.727 12:57:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:26.727 12:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.727 
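Condensed, the nvmf_multitarget run above is a create/count/delete round trip through multitarget_rpc.py. A minimal sketch with the script path and arguments copied from the log (the -s 32 flag is passed through unchanged, exactly as the test does):

    # Sketch of the multitarget round trip exercised above.
    rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    baseline=$("$rpc_py" nvmf_get_targets | jq length)              # 1: the default target

    "$rpc_py" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc_py" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$("$rpc_py" nvmf_get_targets | jq length)" -eq 3 ] || echo "unexpected target count"

    "$rpc_py" nvmf_delete_target -n nvmf_tgt_1
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_2
    [ "$("$rpc_py" nvmf_get_targets | jq length)" -eq "$baseline" ] || echo "targets not cleaned up"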
12:57:07 -- common/autotest_common.sh@10 -- # set +x 00:12:26.727 ************************************ 00:12:26.727 START TEST nvmf_rpc 00:12:26.727 ************************************ 00:12:26.727 12:57:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.986 * Looking for test storage... 00:12:26.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:26.986 12:57:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:26.986 12:57:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:26.986 12:57:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:26.986 12:57:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:26.986 12:57:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:26.986 12:57:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:26.986 12:57:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:26.986 12:57:07 -- scripts/common.sh@335 -- # IFS=.-: 00:12:26.986 12:57:07 -- scripts/common.sh@335 -- # read -ra ver1 00:12:26.986 12:57:07 -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.986 12:57:07 -- scripts/common.sh@336 -- # read -ra ver2 00:12:26.986 12:57:07 -- scripts/common.sh@337 -- # local 'op=<' 00:12:26.986 12:57:07 -- scripts/common.sh@339 -- # ver1_l=2 00:12:26.986 12:57:07 -- scripts/common.sh@340 -- # ver2_l=1 00:12:26.986 12:57:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:26.986 12:57:07 -- scripts/common.sh@343 -- # case "$op" in 00:12:26.986 12:57:07 -- scripts/common.sh@344 -- # : 1 00:12:26.986 12:57:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:26.986 12:57:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.986 12:57:07 -- scripts/common.sh@364 -- # decimal 1 00:12:26.986 12:57:07 -- scripts/common.sh@352 -- # local d=1 00:12:26.986 12:57:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.986 12:57:07 -- scripts/common.sh@354 -- # echo 1 00:12:26.986 12:57:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:26.986 12:57:07 -- scripts/common.sh@365 -- # decimal 2 00:12:26.986 12:57:07 -- scripts/common.sh@352 -- # local d=2 00:12:26.986 12:57:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.986 12:57:07 -- scripts/common.sh@354 -- # echo 2 00:12:26.986 12:57:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:26.986 12:57:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:26.986 12:57:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:26.986 12:57:07 -- scripts/common.sh@367 -- # return 0 00:12:26.986 12:57:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.986 12:57:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:26.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.986 --rc genhtml_branch_coverage=1 00:12:26.986 --rc genhtml_function_coverage=1 00:12:26.986 --rc genhtml_legend=1 00:12:26.986 --rc geninfo_all_blocks=1 00:12:26.986 --rc geninfo_unexecuted_blocks=1 00:12:26.986 00:12:26.986 ' 00:12:26.986 12:57:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:26.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.986 --rc genhtml_branch_coverage=1 00:12:26.986 --rc genhtml_function_coverage=1 00:12:26.986 --rc genhtml_legend=1 00:12:26.986 --rc geninfo_all_blocks=1 00:12:26.986 --rc geninfo_unexecuted_blocks=1 00:12:26.986 00:12:26.986 ' 00:12:26.986 12:57:07 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:26.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.986 --rc genhtml_branch_coverage=1 00:12:26.986 --rc genhtml_function_coverage=1 00:12:26.986 --rc genhtml_legend=1 00:12:26.986 --rc geninfo_all_blocks=1 00:12:26.986 --rc geninfo_unexecuted_blocks=1 00:12:26.986 00:12:26.986 ' 00:12:26.986 12:57:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:26.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.986 --rc genhtml_branch_coverage=1 00:12:26.986 --rc genhtml_function_coverage=1 00:12:26.986 --rc genhtml_legend=1 00:12:26.986 --rc geninfo_all_blocks=1 00:12:26.986 --rc geninfo_unexecuted_blocks=1 00:12:26.986 00:12:26.986 ' 00:12:26.986 12:57:07 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:26.986 12:57:07 -- nvmf/common.sh@7 -- # uname -s 00:12:26.986 12:57:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.986 12:57:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.986 12:57:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.986 12:57:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.986 12:57:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.986 12:57:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.986 12:57:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.986 12:57:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.986 12:57:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.986 12:57:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.986 12:57:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:26.986 12:57:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:26.986 12:57:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.986 12:57:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.986 12:57:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:26.986 12:57:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.986 12:57:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.986 12:57:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.986 12:57:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.986 12:57:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.986 12:57:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.986 12:57:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.986 12:57:07 -- paths/export.sh@5 -- # export PATH 00:12:26.986 12:57:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.986 12:57:07 -- nvmf/common.sh@46 -- # : 0 00:12:26.987 12:57:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:26.987 12:57:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:26.987 12:57:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:26.987 12:57:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.987 12:57:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.987 12:57:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:26.987 12:57:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:26.987 12:57:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:26.987 12:57:07 -- target/rpc.sh@11 -- # loops=5 00:12:26.987 12:57:07 -- target/rpc.sh@23 -- # nvmftestinit 00:12:26.987 12:57:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:26.987 12:57:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.987 12:57:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:26.987 12:57:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:26.987 12:57:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:26.987 12:57:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.987 12:57:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.987 12:57:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.987 12:57:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:26.987 12:57:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:26.987 12:57:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:26.987 12:57:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:26.987 12:57:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:26.987 12:57:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:26.987 12:57:07 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:26.987 12:57:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.987 12:57:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:26.987 12:57:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:26.987 12:57:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:26.987 12:57:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:26.987 12:57:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:26.987 12:57:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.987 12:57:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:26.987 12:57:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:26.987 12:57:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:26.987 12:57:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:26.987 12:57:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:26.987 12:57:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:26.987 Cannot find device "nvmf_tgt_br" 00:12:26.987 12:57:07 -- nvmf/common.sh@154 -- # true 00:12:26.987 12:57:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.987 Cannot find device "nvmf_tgt_br2" 00:12:26.987 12:57:07 -- nvmf/common.sh@155 -- # true 00:12:26.987 12:57:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:26.987 12:57:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:26.987 Cannot find device "nvmf_tgt_br" 00:12:26.987 12:57:07 -- nvmf/common.sh@157 -- # true 00:12:26.987 12:57:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:26.987 Cannot find device "nvmf_tgt_br2" 00:12:26.987 12:57:07 -- nvmf/common.sh@158 -- # true 00:12:26.987 12:57:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:27.246 12:57:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:27.246 12:57:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.246 12:57:07 -- nvmf/common.sh@161 -- # true 00:12:27.246 12:57:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.246 12:57:07 -- nvmf/common.sh@162 -- # true 00:12:27.246 12:57:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.246 12:57:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.246 12:57:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.246 12:57:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.246 12:57:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.246 12:57:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.246 12:57:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.246 12:57:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:27.246 12:57:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:27.246 12:57:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:27.246 12:57:07 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:27.246 12:57:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:27.246 12:57:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:27.246 12:57:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.246 12:57:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.246 12:57:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.246 12:57:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:27.246 12:57:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:27.246 12:57:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.246 12:57:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.246 12:57:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.246 12:57:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.246 12:57:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.246 12:57:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:27.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:12:27.246 00:12:27.246 --- 10.0.0.2 ping statistics --- 00:12:27.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.246 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:12:27.246 12:57:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:27.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:27.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:12:27.246 00:12:27.246 --- 10.0.0.3 ping statistics --- 00:12:27.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.246 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:27.246 12:57:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:27.505 00:12:27.505 --- 10.0.0.1 ping statistics --- 00:12:27.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.505 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:27.505 12:57:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.505 12:57:08 -- nvmf/common.sh@421 -- # return 0 00:12:27.505 12:57:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:27.505 12:57:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.505 12:57:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:27.505 12:57:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:27.505 12:57:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.505 12:57:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:27.505 12:57:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:27.505 12:57:08 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:27.505 12:57:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:27.505 12:57:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:27.505 12:57:08 -- common/autotest_common.sh@10 -- # set +x 00:12:27.505 12:57:08 -- nvmf/common.sh@469 -- # nvmfpid=77649 00:12:27.505 12:57:08 -- nvmf/common.sh@470 -- # waitforlisten 77649 00:12:27.505 12:57:08 -- common/autotest_common.sh@829 -- # '[' -z 77649 ']' 00:12:27.505 12:57:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.505 12:57:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.505 12:57:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.505 12:57:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.505 12:57:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.505 12:57:08 -- common/autotest_common.sh@10 -- # set +x 00:12:27.505 [2024-12-13 12:57:08.101240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:27.505 [2024-12-13 12:57:08.101326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.505 [2024-12-13 12:57:08.238716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.765 [2024-12-13 12:57:08.299254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:27.765 [2024-12-13 12:57:08.299393] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.765 [2024-12-13 12:57:08.299404] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.765 [2024-12-13 12:57:08.299412] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
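The nvmfappstart step above amounts to launching nvmf_tgt inside the test namespace and waiting until its RPC socket is up. A simplified sketch, assuming the binary path, flags, and socket path printed in the log (the real waitforlisten helper does more than poll for the socket file):

    # Simplified sketch of nvmfappstart / waitforlisten.
    NS=nvmf_tgt_ns_spdk
    APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

    ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break                        # RPC socket is listening
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt running as pid $nvmfpid"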
00:12:27.765 [2024-12-13 12:57:08.299537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.765 [2024-12-13 12:57:08.300610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.765 [2024-12-13 12:57:08.300799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.765 [2024-12-13 12:57:08.300805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.700 12:57:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.700 12:57:09 -- common/autotest_common.sh@862 -- # return 0 00:12:28.700 12:57:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:28.700 12:57:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.700 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.700 12:57:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.700 12:57:09 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:28.700 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.700 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.700 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.700 12:57:09 -- target/rpc.sh@26 -- # stats='{ 00:12:28.700 "poll_groups": [ 00:12:28.700 { 00:12:28.700 "admin_qpairs": 0, 00:12:28.700 "completed_nvme_io": 0, 00:12:28.700 "current_admin_qpairs": 0, 00:12:28.700 "current_io_qpairs": 0, 00:12:28.700 "io_qpairs": 0, 00:12:28.700 "name": "nvmf_tgt_poll_group_0", 00:12:28.700 "pending_bdev_io": 0, 00:12:28.700 "transports": [] 00:12:28.700 }, 00:12:28.700 { 00:12:28.700 "admin_qpairs": 0, 00:12:28.700 "completed_nvme_io": 0, 00:12:28.700 "current_admin_qpairs": 0, 00:12:28.700 "current_io_qpairs": 0, 00:12:28.700 "io_qpairs": 0, 00:12:28.700 "name": "nvmf_tgt_poll_group_1", 00:12:28.700 "pending_bdev_io": 0, 00:12:28.700 "transports": [] 00:12:28.700 }, 00:12:28.700 { 00:12:28.700 "admin_qpairs": 0, 00:12:28.700 "completed_nvme_io": 0, 00:12:28.700 "current_admin_qpairs": 0, 00:12:28.700 "current_io_qpairs": 0, 00:12:28.700 "io_qpairs": 0, 00:12:28.700 "name": "nvmf_tgt_poll_group_2", 00:12:28.700 "pending_bdev_io": 0, 00:12:28.700 "transports": [] 00:12:28.700 }, 00:12:28.701 { 00:12:28.701 "admin_qpairs": 0, 00:12:28.701 "completed_nvme_io": 0, 00:12:28.701 "current_admin_qpairs": 0, 00:12:28.701 "current_io_qpairs": 0, 00:12:28.701 "io_qpairs": 0, 00:12:28.701 "name": "nvmf_tgt_poll_group_3", 00:12:28.701 "pending_bdev_io": 0, 00:12:28.701 "transports": [] 00:12:28.701 } 00:12:28.701 ], 00:12:28.701 "tick_rate": 2200000000 00:12:28.701 }' 00:12:28.701 12:57:09 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:28.701 12:57:09 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:28.701 12:57:09 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:28.701 12:57:09 -- target/rpc.sh@15 -- # wc -l 00:12:28.701 12:57:09 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:28.701 12:57:09 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:28.701 12:57:09 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:28.701 12:57:09 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.701 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.701 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.701 [2024-12-13 12:57:09.301054] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.701 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.701 12:57:09 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:28.701 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.701 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.701 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.701 12:57:09 -- target/rpc.sh@33 -- # stats='{ 00:12:28.701 "poll_groups": [ 00:12:28.701 { 00:12:28.701 "admin_qpairs": 0, 00:12:28.701 "completed_nvme_io": 0, 00:12:28.701 "current_admin_qpairs": 0, 00:12:28.701 "current_io_qpairs": 0, 00:12:28.701 "io_qpairs": 0, 00:12:28.701 "name": "nvmf_tgt_poll_group_0", 00:12:28.701 "pending_bdev_io": 0, 00:12:28.701 "transports": [ 00:12:28.701 { 00:12:28.701 "trtype": "TCP" 00:12:28.701 } 00:12:28.701 ] 00:12:28.701 }, 00:12:28.701 { 00:12:28.701 "admin_qpairs": 0, 00:12:28.701 "completed_nvme_io": 0, 00:12:28.701 "current_admin_qpairs": 0, 00:12:28.701 "current_io_qpairs": 0, 00:12:28.701 "io_qpairs": 0, 00:12:28.701 "name": "nvmf_tgt_poll_group_1", 00:12:28.701 "pending_bdev_io": 0, 00:12:28.701 "transports": [ 00:12:28.701 { 00:12:28.701 "trtype": "TCP" 00:12:28.701 } 00:12:28.701 ] 00:12:28.701 }, 00:12:28.701 { 00:12:28.701 "admin_qpairs": 0, 00:12:28.701 "completed_nvme_io": 0, 00:12:28.701 "current_admin_qpairs": 0, 00:12:28.701 "current_io_qpairs": 0, 00:12:28.701 "io_qpairs": 0, 00:12:28.701 "name": "nvmf_tgt_poll_group_2", 00:12:28.701 "pending_bdev_io": 0, 00:12:28.701 "transports": [ 00:12:28.701 { 00:12:28.701 "trtype": "TCP" 00:12:28.701 } 00:12:28.701 ] 00:12:28.701 }, 00:12:28.701 { 00:12:28.701 "admin_qpairs": 0, 00:12:28.701 "completed_nvme_io": 0, 00:12:28.701 "current_admin_qpairs": 0, 00:12:28.701 "current_io_qpairs": 0, 00:12:28.701 "io_qpairs": 0, 00:12:28.701 "name": "nvmf_tgt_poll_group_3", 00:12:28.701 "pending_bdev_io": 0, 00:12:28.701 "transports": [ 00:12:28.701 { 00:12:28.701 "trtype": "TCP" 00:12:28.701 } 00:12:28.701 ] 00:12:28.701 } 00:12:28.701 ], 00:12:28.701 "tick_rate": 2200000000 00:12:28.701 }' 00:12:28.701 12:57:09 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:28.701 12:57:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:28.701 12:57:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:28.701 12:57:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.701 12:57:09 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:28.701 12:57:09 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:28.701 12:57:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:28.701 12:57:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:28.701 12:57:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.701 12:57:09 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:28.701 12:57:09 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:28.701 12:57:09 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:28.701 12:57:09 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:28.701 12:57:09 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:28.701 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.701 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.701 Malloc1 00:12:28.701 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.701 12:57:09 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.701 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.701 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.960 
12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.960 12:57:09 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.960 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.960 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.960 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.960 12:57:09 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:28.960 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.960 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.960 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.960 12:57:09 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.960 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.960 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.960 [2024-12-13 12:57:09.502818] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.960 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.960 12:57:09 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 -a 10.0.0.2 -s 4420 00:12:28.960 12:57:09 -- common/autotest_common.sh@650 -- # local es=0 00:12:28.960 12:57:09 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 -a 10.0.0.2 -s 4420 00:12:28.960 12:57:09 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:28.960 12:57:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.960 12:57:09 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:28.960 12:57:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.960 12:57:09 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:28.960 12:57:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.960 12:57:09 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:28.960 12:57:09 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:28.960 12:57:09 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 -a 10.0.0.2 -s 4420 00:12:28.960 [2024-12-13 12:57:09.531126] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29' 00:12:28.960 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:28.960 could not add new controller: failed to write to nvme-fabrics device 00:12:28.960 12:57:09 -- common/autotest_common.sh@653 -- # es=1 00:12:28.960 12:57:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:28.960 12:57:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:28.960 12:57:09 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:28.960 12:57:09 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:28.960 12:57:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.960 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.960 12:57:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.960 12:57:09 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.960 12:57:09 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.960 12:57:09 -- common/autotest_common.sh@1187 -- # local i=0 00:12:28.960 12:57:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.960 12:57:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:28.960 12:57:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:31.492 12:57:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:31.492 12:57:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:31.492 12:57:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.492 12:57:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:31.492 12:57:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.492 12:57:11 -- common/autotest_common.sh@1197 -- # return 0 00:12:31.492 12:57:11 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.492 12:57:11 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.492 12:57:11 -- common/autotest_common.sh@1208 -- # local i=0 00:12:31.492 12:57:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:31.492 12:57:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.492 12:57:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.492 12:57:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:31.492 12:57:11 -- common/autotest_common.sh@1220 -- # return 0 00:12:31.492 12:57:11 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:31.492 12:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.492 12:57:11 -- common/autotest_common.sh@10 -- # set +x 00:12:31.492 12:57:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.492 12:57:11 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.492 12:57:11 -- common/autotest_common.sh@650 -- # local es=0 00:12:31.492 12:57:11 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.492 12:57:11 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:31.492 12:57:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.492 12:57:11 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:31.492 12:57:11 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.492 12:57:11 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:31.492 12:57:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.492 12:57:11 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:31.492 12:57:11 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.492 12:57:11 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.492 [2024-12-13 12:57:11.832174] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29' 00:12:31.492 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:31.492 could not add new controller: failed to write to nvme-fabrics device 00:12:31.492 12:57:11 -- common/autotest_common.sh@653 -- # es=1 00:12:31.492 12:57:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.492 12:57:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.492 12:57:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.492 12:57:11 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:31.492 12:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.492 12:57:11 -- common/autotest_common.sh@10 -- # set +x 00:12:31.492 12:57:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.492 12:57:11 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.492 12:57:12 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.492 12:57:12 -- common/autotest_common.sh@1187 -- # local i=0 00:12:31.492 12:57:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.492 12:57:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:31.492 12:57:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:33.397 12:57:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:33.397 12:57:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:33.397 12:57:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.397 12:57:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:33.397 12:57:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.397 12:57:14 -- common/autotest_common.sh@1197 -- # return 0 00:12:33.397 12:57:14 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.397 12:57:14 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.397 12:57:14 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.397 12:57:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.397 12:57:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.397 12:57:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.397 12:57:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.397 12:57:14 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.397 12:57:14 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.397 12:57:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 12:57:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 12:57:14 -- target/rpc.sh@81 -- # seq 1 5 00:12:33.397 12:57:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.397 12:57:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.397 12:57:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 12:57:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 12:57:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.397 12:57:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 [2024-12-13 12:57:14.121435] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.397 12:57:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 12:57:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.397 12:57:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 12:57:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 12:57:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.397 12:57:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 12:57:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 12:57:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.656 12:57:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.656 12:57:14 -- common/autotest_common.sh@1187 -- # local i=0 00:12:33.656 12:57:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.656 12:57:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:33.656 12:57:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:35.558 12:57:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:35.558 12:57:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:35.558 12:57:16 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.816 12:57:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:35.816 12:57:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.816 12:57:16 -- common/autotest_common.sh@1197 -- # return 0 00:12:35.816 12:57:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.816 12:57:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.817 12:57:16 -- common/autotest_common.sh@1208 -- # local i=0 00:12:35.817 12:57:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:35.817 12:57:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
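The allow/deny sequence earlier in this test (the two NOT-wrapped connects) boils down to a host-ACL check: with allow_any_host disabled, nvme connect is rejected until the host NQN is added to the subsystem. The sketch below uses the same NQNs, address, and rpc_cmd wrapper that appear in the log; it is a condensed illustration, not the literal test code.

    # Host-ACL round trip, as exercised above (rpc_cmd is the harness's RPC wrapper).
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29
    HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29
    SUBNQN=nqn.2016-06.io.spdk:cnode1

    # Expected to fail while the host is not on the subsystem's allow list.
    if nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420; then
        echo "unexpected: connect succeeded without an allowed host"
    fi

    # Allow the host explicitly; the same connect now succeeds.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"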
00:12:35.817 12:57:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:35.817 12:57:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.817 12:57:16 -- common/autotest_common.sh@1220 -- # return 0 00:12:35.817 12:57:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.817 12:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 12:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:57:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.817 12:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 12:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:57:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.817 12:57:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.817 12:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 12:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:57:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.817 12:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 [2024-12-13 12:57:16.428458] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.817 12:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:57:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.817 12:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 12:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:57:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.817 12:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:57:16 -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 12:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:57:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.076 12:57:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.076 12:57:16 -- common/autotest_common.sh@1187 -- # local i=0 00:12:36.076 12:57:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.076 12:57:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:36.076 12:57:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:38.006 12:57:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:38.006 12:57:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:38.006 12:57:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.006 12:57:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:38.006 12:57:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.006 12:57:18 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:38.006 12:57:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.006 12:57:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.006 12:57:18 -- common/autotest_common.sh@1208 -- # local i=0 00:12:38.006 12:57:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:38.006 12:57:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.006 12:57:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.006 12:57:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:38.006 12:57:18 -- common/autotest_common.sh@1220 -- # return 0 00:12:38.006 12:57:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.006 12:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.006 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 12:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.006 12:57:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.006 12:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.006 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 12:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.006 12:57:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.006 12:57:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.006 12:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.006 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 12:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.006 12:57:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.006 12:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.006 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 [2024-12-13 12:57:18.743416] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.006 12:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.006 12:57:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.006 12:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.006 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 12:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.006 12:57:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.006 12:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.006 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 12:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.006 12:57:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.265 12:57:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.265 12:57:18 -- common/autotest_common.sh@1187 -- # local i=0 00:12:38.265 12:57:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.265 12:57:18 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:38.265 12:57:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:40.171 12:57:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:40.171 12:57:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:40.171 12:57:20 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.430 12:57:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:40.430 12:57:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.430 12:57:20 -- common/autotest_common.sh@1197 -- # return 0 00:12:40.430 12:57:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.430 12:57:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.430 12:57:20 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.430 12:57:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.430 12:57:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.430 12:57:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.430 12:57:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.430 12:57:21 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.430 12:57:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.430 12:57:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.430 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:12:40.430 12:57:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.430 12:57:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.430 12:57:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.430 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:12:40.430 12:57:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.430 12:57:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.430 12:57:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.430 12:57:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.430 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:12:40.430 12:57:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.430 12:57:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.430 12:57:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.430 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:12:40.430 [2024-12-13 12:57:21.054509] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.430 12:57:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.430 12:57:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.430 12:57:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.430 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:12:40.430 12:57:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.430 12:57:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.430 12:57:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.430 12:57:21 -- common/autotest_common.sh@10 -- # set +x 00:12:40.430 12:57:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.430 
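Each pass of this loop drives the same target-side RPC sequence before the initiator attaches: create the subsystem with a fixed serial, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 bdev as namespace 5, allow any host, connect from the initiator, wait for the serial to show up in lsblk, then disconnect and tear the subsystem back down. Condensed into a single pass it looks roughly like the sketch below (paths, addresses and the serial are taken from the trace; the harness's retry caps and error handling are omitted).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # target side: subsystem, TCP listener, namespace, open access
  $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host $nqn

  # initiator side: connect, then poll lsblk for the serial (simplified waitforserial;
  # the harness version caps this at 15 two-second retries)
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n $nqn -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done

  # teardown: disconnect, drop the namespace, delete the subsystem
  nvme disconnect -n $nqn
  $rpc nvmf_subsystem_remove_ns $nqn 5
  $rpc nvmf_delete_subsystem $nqn
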
12:57:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.689 12:57:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.689 12:57:21 -- common/autotest_common.sh@1187 -- # local i=0 00:12:40.689 12:57:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.689 12:57:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:40.689 12:57:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:42.593 12:57:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:42.593 12:57:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:42.593 12:57:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.593 12:57:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:42.593 12:57:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.593 12:57:23 -- common/autotest_common.sh@1197 -- # return 0 00:12:42.593 12:57:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.852 12:57:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.852 12:57:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:42.852 12:57:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:42.852 12:57:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.852 12:57:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:42.852 12:57:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.852 12:57:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:42.852 12:57:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.852 12:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.852 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:12:42.852 12:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.852 12:57:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.852 12:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.852 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:12:42.852 12:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.852 12:57:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.852 12:57:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.852 12:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.852 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:12:42.852 12:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.852 12:57:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.852 12:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.852 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:12:42.852 [2024-12-13 12:57:23.458276] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.852 12:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.852 12:57:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.852 
12:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.852 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:12:42.852 12:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.852 12:57:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.852 12:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.852 12:57:23 -- common/autotest_common.sh@10 -- # set +x 00:12:42.852 12:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.852 12:57:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.111 12:57:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.111 12:57:23 -- common/autotest_common.sh@1187 -- # local i=0 00:12:43.111 12:57:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.111 12:57:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:43.111 12:57:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:45.022 12:57:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:45.022 12:57:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:45.022 12:57:25 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.022 12:57:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:45.022 12:57:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.022 12:57:25 -- common/autotest_common.sh@1197 -- # return 0 00:12:45.022 12:57:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.022 12:57:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.022 12:57:25 -- common/autotest_common.sh@1208 -- # local i=0 00:12:45.022 12:57:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:45.022 12:57:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.022 12:57:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.022 12:57:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:45.022 12:57:25 -- common/autotest_common.sh@1220 -- # return 0 00:12:45.022 12:57:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.022 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.022 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.022 12:57:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.022 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.022 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.022 12:57:25 -- target/rpc.sh@99 -- # seq 1 5 00:12:45.022 12:57:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.022 12:57:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.022 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.022 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.022 12:57:25 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.022 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.022 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 [2024-12-13 12:57:25.785296] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.022 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.022 12:57:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.022 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.022 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.281 12:57:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 [2024-12-13 12:57:25.833319] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.281 12:57:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 [2024-12-13 12:57:25.885364] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.281 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.281 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.281 12:57:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.281 12:57:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 [2024-12-13 12:57:25.933435] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 
12:57:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.282 12:57:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 [2024-12-13 12:57:25.981474] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.282 12:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:26 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.282 12:57:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:26 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.282 12:57:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:26 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:45.282 12:57:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.282 12:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:45.282 12:57:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.282 12:57:26 -- target/rpc.sh@110 -- # stats='{ 00:12:45.282 "poll_groups": [ 00:12:45.282 { 00:12:45.282 "admin_qpairs": 2, 00:12:45.282 "completed_nvme_io": 164, 00:12:45.282 "current_admin_qpairs": 0, 00:12:45.282 "current_io_qpairs": 0, 00:12:45.282 "io_qpairs": 16, 00:12:45.282 "name": "nvmf_tgt_poll_group_0", 00:12:45.282 "pending_bdev_io": 0, 00:12:45.282 "transports": [ 00:12:45.282 { 00:12:45.282 "trtype": "TCP" 00:12:45.282 } 00:12:45.282 ] 00:12:45.282 }, 00:12:45.282 { 00:12:45.282 "admin_qpairs": 3, 00:12:45.282 "completed_nvme_io": 67, 00:12:45.282 "current_admin_qpairs": 0, 00:12:45.282 "current_io_qpairs": 0, 00:12:45.282 "io_qpairs": 17, 00:12:45.282 "name": "nvmf_tgt_poll_group_1", 00:12:45.282 "pending_bdev_io": 0, 00:12:45.282 "transports": [ 00:12:45.282 { 00:12:45.282 "trtype": "TCP" 00:12:45.282 } 00:12:45.282 ] 00:12:45.282 }, 00:12:45.282 { 00:12:45.282 "admin_qpairs": 1, 00:12:45.282 "completed_nvme_io": 71, 00:12:45.282 "current_admin_qpairs": 0, 00:12:45.282 "current_io_qpairs": 0, 00:12:45.282 "io_qpairs": 19, 00:12:45.282 "name": "nvmf_tgt_poll_group_2", 00:12:45.282 "pending_bdev_io": 0, 00:12:45.282 "transports": [ 00:12:45.282 { 00:12:45.282 "trtype": "TCP" 00:12:45.282 } 00:12:45.282 ] 00:12:45.282 }, 00:12:45.282 { 00:12:45.282 "admin_qpairs": 1, 00:12:45.282 "completed_nvme_io": 118, 00:12:45.282 "current_admin_qpairs": 0, 00:12:45.282 "current_io_qpairs": 0, 00:12:45.282 "io_qpairs": 18, 00:12:45.282 "name": "nvmf_tgt_poll_group_3", 00:12:45.282 "pending_bdev_io": 0, 00:12:45.282 "transports": [ 00:12:45.282 { 00:12:45.282 "trtype": "TCP" 00:12:45.282 } 00:12:45.282 ] 00:12:45.282 } 00:12:45.282 ], 00:12:45.282 "tick_rate": 2200000000 00:12:45.282 }' 00:12:45.282 12:57:26 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:45.282 12:57:26 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:45.282 12:57:26 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:45.282 12:57:26 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.541 12:57:26 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:45.541 12:57:26 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:45.541 12:57:26 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:45.541 12:57:26 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:45.541 12:57:26 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.541 12:57:26 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:45.541 12:57:26 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:45.541 12:57:26 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:45.541 12:57:26 -- target/rpc.sh@123 -- # nvmftestfini 00:12:45.541 12:57:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:45.541 12:57:26 -- nvmf/common.sh@116 -- # sync 00:12:45.541 12:57:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:45.541 12:57:26 -- nvmf/common.sh@119 -- # set +e 00:12:45.541 12:57:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:45.541 12:57:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:45.541 rmmod nvme_tcp 00:12:45.541 rmmod nvme_fabrics 00:12:45.541 rmmod nvme_keyring 00:12:45.541 12:57:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:45.541 12:57:26 -- nvmf/common.sh@123 -- # set -e 00:12:45.541 12:57:26 -- nvmf/common.sh@124 
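The jsum helper used here is nothing more than jq piped into awk over the captured stats: it extracts one counter from every poll group and sums the column, so the surrounding checks only require the aggregated admin and I/O queue-pair counts to be greater than zero. A standalone equivalent against a live target is sketched below (the jq filter and awk program are copied from the trace).

  # Total io_qpairs across the poll groups reported by nvmf_get_stats
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats \
      | jq '.poll_groups[].io_qpairs' \
      | awk '{s+=$1} END {print s}'
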
-- # return 0 00:12:45.541 12:57:26 -- nvmf/common.sh@477 -- # '[' -n 77649 ']' 00:12:45.541 12:57:26 -- nvmf/common.sh@478 -- # killprocess 77649 00:12:45.541 12:57:26 -- common/autotest_common.sh@936 -- # '[' -z 77649 ']' 00:12:45.541 12:57:26 -- common/autotest_common.sh@940 -- # kill -0 77649 00:12:45.541 12:57:26 -- common/autotest_common.sh@941 -- # uname 00:12:45.541 12:57:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.541 12:57:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77649 00:12:45.541 12:57:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:45.541 12:57:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:45.541 12:57:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77649' 00:12:45.541 killing process with pid 77649 00:12:45.541 12:57:26 -- common/autotest_common.sh@955 -- # kill 77649 00:12:45.541 12:57:26 -- common/autotest_common.sh@960 -- # wait 77649 00:12:45.800 12:57:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:45.800 12:57:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:45.800 12:57:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:45.800 12:57:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.800 12:57:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:45.800 12:57:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.800 12:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.800 12:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.800 12:57:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:45.800 00:12:45.800 real 0m19.032s 00:12:45.800 user 1m11.410s 00:12:45.800 sys 0m2.745s 00:12:45.800 12:57:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.800 ************************************ 00:12:45.800 END TEST nvmf_rpc 00:12:45.800 12:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:45.800 ************************************ 00:12:45.800 12:57:26 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:45.800 12:57:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:45.800 12:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.800 12:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:45.800 ************************************ 00:12:45.800 START TEST nvmf_invalid 00:12:45.800 ************************************ 00:12:45.800 12:57:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.060 * Looking for test storage... 
00:12:46.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:46.060 12:57:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:46.060 12:57:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:46.060 12:57:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:46.060 12:57:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:46.060 12:57:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:46.060 12:57:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:46.060 12:57:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:46.060 12:57:26 -- scripts/common.sh@335 -- # IFS=.-: 00:12:46.060 12:57:26 -- scripts/common.sh@335 -- # read -ra ver1 00:12:46.060 12:57:26 -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.060 12:57:26 -- scripts/common.sh@336 -- # read -ra ver2 00:12:46.060 12:57:26 -- scripts/common.sh@337 -- # local 'op=<' 00:12:46.060 12:57:26 -- scripts/common.sh@339 -- # ver1_l=2 00:12:46.060 12:57:26 -- scripts/common.sh@340 -- # ver2_l=1 00:12:46.060 12:57:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:46.060 12:57:26 -- scripts/common.sh@343 -- # case "$op" in 00:12:46.060 12:57:26 -- scripts/common.sh@344 -- # : 1 00:12:46.060 12:57:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:46.060 12:57:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:46.060 12:57:26 -- scripts/common.sh@364 -- # decimal 1 00:12:46.060 12:57:26 -- scripts/common.sh@352 -- # local d=1 00:12:46.060 12:57:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.060 12:57:26 -- scripts/common.sh@354 -- # echo 1 00:12:46.060 12:57:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:46.060 12:57:26 -- scripts/common.sh@365 -- # decimal 2 00:12:46.060 12:57:26 -- scripts/common.sh@352 -- # local d=2 00:12:46.060 12:57:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.060 12:57:26 -- scripts/common.sh@354 -- # echo 2 00:12:46.060 12:57:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:46.060 12:57:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:46.060 12:57:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:46.060 12:57:26 -- scripts/common.sh@367 -- # return 0 00:12:46.060 12:57:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.060 12:57:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:46.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.060 --rc genhtml_branch_coverage=1 00:12:46.060 --rc genhtml_function_coverage=1 00:12:46.060 --rc genhtml_legend=1 00:12:46.060 --rc geninfo_all_blocks=1 00:12:46.060 --rc geninfo_unexecuted_blocks=1 00:12:46.060 00:12:46.060 ' 00:12:46.060 12:57:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:46.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.060 --rc genhtml_branch_coverage=1 00:12:46.060 --rc genhtml_function_coverage=1 00:12:46.060 --rc genhtml_legend=1 00:12:46.060 --rc geninfo_all_blocks=1 00:12:46.060 --rc geninfo_unexecuted_blocks=1 00:12:46.060 00:12:46.060 ' 00:12:46.060 12:57:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:46.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.060 --rc genhtml_branch_coverage=1 00:12:46.060 --rc genhtml_function_coverage=1 00:12:46.060 --rc genhtml_legend=1 00:12:46.060 --rc geninfo_all_blocks=1 00:12:46.060 --rc geninfo_unexecuted_blocks=1 00:12:46.060 00:12:46.060 ' 00:12:46.060 
12:57:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:46.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.060 --rc genhtml_branch_coverage=1 00:12:46.060 --rc genhtml_function_coverage=1 00:12:46.060 --rc genhtml_legend=1 00:12:46.060 --rc geninfo_all_blocks=1 00:12:46.060 --rc geninfo_unexecuted_blocks=1 00:12:46.060 00:12:46.060 ' 00:12:46.060 12:57:26 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:46.060 12:57:26 -- nvmf/common.sh@7 -- # uname -s 00:12:46.060 12:57:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.060 12:57:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.060 12:57:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.060 12:57:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.060 12:57:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.060 12:57:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.060 12:57:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.060 12:57:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.060 12:57:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.060 12:57:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.060 12:57:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:46.060 12:57:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:46.060 12:57:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.060 12:57:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.060 12:57:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:46.060 12:57:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.060 12:57:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.060 12:57:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.060 12:57:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.060 12:57:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.060 12:57:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.060 12:57:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.060 12:57:26 -- paths/export.sh@5 -- # export PATH 00:12:46.060 12:57:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.060 12:57:26 -- nvmf/common.sh@46 -- # : 0 00:12:46.060 12:57:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:46.060 12:57:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:46.060 12:57:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:46.060 12:57:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.060 12:57:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.060 12:57:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:46.060 12:57:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:46.060 12:57:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:46.060 12:57:26 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.060 12:57:26 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.060 12:57:26 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:46.060 12:57:26 -- target/invalid.sh@14 -- # target=foobar 00:12:46.060 12:57:26 -- target/invalid.sh@16 -- # RANDOM=0 00:12:46.060 12:57:26 -- target/invalid.sh@34 -- # nvmftestinit 00:12:46.060 12:57:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:46.060 12:57:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.060 12:57:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:46.060 12:57:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:46.060 12:57:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:46.060 12:57:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.060 12:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.060 12:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.060 12:57:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:46.060 12:57:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:46.060 12:57:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:46.060 12:57:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:46.060 12:57:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:46.060 12:57:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:46.060 12:57:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.060 12:57:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.060 12:57:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:46.060 12:57:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:46.060 12:57:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:46.060 12:57:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:46.060 12:57:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:46.060 12:57:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.060 12:57:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:46.060 12:57:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:46.060 12:57:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:46.060 12:57:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:46.060 12:57:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:46.060 12:57:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:46.060 Cannot find device "nvmf_tgt_br" 00:12:46.060 12:57:26 -- nvmf/common.sh@154 -- # true 00:12:46.060 12:57:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.060 Cannot find device "nvmf_tgt_br2" 00:12:46.060 12:57:26 -- nvmf/common.sh@155 -- # true 00:12:46.060 12:57:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:46.060 12:57:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:46.060 Cannot find device "nvmf_tgt_br" 00:12:46.060 12:57:26 -- nvmf/common.sh@157 -- # true 00:12:46.061 12:57:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:46.061 Cannot find device "nvmf_tgt_br2" 00:12:46.061 12:57:26 -- nvmf/common.sh@158 -- # true 00:12:46.061 12:57:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:46.319 12:57:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:46.319 12:57:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.319 12:57:26 -- nvmf/common.sh@161 -- # true 00:12:46.319 12:57:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.319 12:57:26 -- nvmf/common.sh@162 -- # true 00:12:46.319 12:57:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:46.319 12:57:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:46.319 12:57:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:46.319 12:57:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.319 12:57:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.319 12:57:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.319 12:57:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.319 12:57:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:46.319 12:57:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:46.319 12:57:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:46.319 12:57:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:46.319 12:57:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:46.319 12:57:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:46.319 12:57:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.319 12:57:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.319 12:57:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:46.319 12:57:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:46.319 12:57:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:46.319 12:57:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.319 12:57:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.319 12:57:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.578 12:57:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.578 12:57:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.578 12:57:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:46.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:12:46.578 00:12:46.578 --- 10.0.0.2 ping statistics --- 00:12:46.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.578 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:46.578 12:57:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:46.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:46.578 00:12:46.578 --- 10.0.0.3 ping statistics --- 00:12:46.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.578 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:46.578 12:57:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:46.578 00:12:46.578 --- 10.0.0.1 ping statistics --- 00:12:46.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.578 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:46.578 12:57:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.578 12:57:27 -- nvmf/common.sh@421 -- # return 0 00:12:46.578 12:57:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:46.578 12:57:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.578 12:57:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:46.578 12:57:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:46.578 12:57:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.578 12:57:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:46.578 12:57:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:46.578 12:57:27 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:46.578 12:57:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:46.578 12:57:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.578 12:57:27 -- common/autotest_common.sh@10 -- # set +x 00:12:46.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
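Those three pings verify the virtual topology that nvmf_veth_init just built: an nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator interface left in the root namespace on 10.0.0.1, and the bridge-side peers enslaved to nvmf_br, with port 4420 opened in iptables. Stripped of the harness's cleanup and error handling, the setup reduces to the sketch below (interface names and addresses as in the trace; run as root).

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
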
00:12:46.578 12:57:27 -- nvmf/common.sh@469 -- # nvmfpid=78168 00:12:46.578 12:57:27 -- nvmf/common.sh@470 -- # waitforlisten 78168 00:12:46.578 12:57:27 -- common/autotest_common.sh@829 -- # '[' -z 78168 ']' 00:12:46.578 12:57:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.578 12:57:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.578 12:57:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.578 12:57:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.578 12:57:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.578 12:57:27 -- common/autotest_common.sh@10 -- # set +x 00:12:46.578 [2024-12-13 12:57:27.192511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:46.579 [2024-12-13 12:57:27.192609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.579 [2024-12-13 12:57:27.325121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.837 [2024-12-13 12:57:27.400986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:46.837 [2024-12-13 12:57:27.401124] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.837 [2024-12-13 12:57:27.401136] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.837 [2024-12-13 12:57:27.401144] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
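With the namespace in place, nvmfappstart launches the target inside it; the EAL banner, the tracepoint notices and the reactor start-up messages that follow all come from that process, and waitforlisten blocks until its RPC socket answers. A minimal sketch of the same launch (command line and paths from the log; the polling loop is a simplification of the harness's waitforlisten):

  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0xF: four cores
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait until the target's RPC socket accepts requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
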
00:12:46.837 [2024-12-13 12:57:27.401304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.837 [2024-12-13 12:57:27.401425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.838 [2024-12-13 12:57:27.401954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.838 [2024-12-13 12:57:27.401962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.405 12:57:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.405 12:57:28 -- common/autotest_common.sh@862 -- # return 0 00:12:47.405 12:57:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:47.405 12:57:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.405 12:57:28 -- common/autotest_common.sh@10 -- # set +x 00:12:47.663 12:57:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.663 12:57:28 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:47.663 12:57:28 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7733 00:12:47.922 [2024-12-13 12:57:28.453900] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:47.922 12:57:28 -- target/invalid.sh@40 -- # out='2024/12/13 12:57:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7733 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:47.922 request: 00:12:47.922 { 00:12:47.922 "method": "nvmf_create_subsystem", 00:12:47.922 "params": { 00:12:47.922 "nqn": "nqn.2016-06.io.spdk:cnode7733", 00:12:47.922 "tgt_name": "foobar" 00:12:47.922 } 00:12:47.922 } 00:12:47.922 Got JSON-RPC error response 00:12:47.922 GoRPCClient: error on JSON-RPC call' 00:12:47.922 12:57:28 -- target/invalid.sh@41 -- # [[ 2024/12/13 12:57:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7733 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:47.922 request: 00:12:47.922 { 00:12:47.922 "method": "nvmf_create_subsystem", 00:12:47.922 "params": { 00:12:47.922 "nqn": "nqn.2016-06.io.spdk:cnode7733", 00:12:47.922 "tgt_name": "foobar" 00:12:47.922 } 00:12:47.922 } 00:12:47.923 Got JSON-RPC error response 00:12:47.923 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:47.923 12:57:28 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:47.923 12:57:28 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6656 00:12:48.181 [2024-12-13 12:57:28.746290] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6656: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:48.181 12:57:28 -- target/invalid.sh@45 -- # out='2024/12/13 12:57:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6656 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:48.181 request: 00:12:48.181 { 00:12:48.181 "method": "nvmf_create_subsystem", 00:12:48.181 "params": { 00:12:48.181 "nqn": "nqn.2016-06.io.spdk:cnode6656", 00:12:48.181 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:12:48.181 } 00:12:48.181 } 00:12:48.181 Got JSON-RPC error response 00:12:48.181 GoRPCClient: error on JSON-RPC call' 00:12:48.181 12:57:28 -- target/invalid.sh@46 -- # [[ 2024/12/13 12:57:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6656 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:48.181 request: 00:12:48.181 { 00:12:48.181 "method": "nvmf_create_subsystem", 00:12:48.181 "params": { 00:12:48.181 "nqn": "nqn.2016-06.io.spdk:cnode6656", 00:12:48.181 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:48.181 } 00:12:48.181 } 00:12:48.181 Got JSON-RPC error response 00:12:48.181 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:48.181 12:57:28 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:48.181 12:57:28 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode130 00:12:48.440 [2024-12-13 12:57:28.978433] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode130: invalid model number 'SPDK_Controller' 00:12:48.440 12:57:29 -- target/invalid.sh@50 -- # out='2024/12/13 12:57:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode130], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:48.440 request: 00:12:48.440 { 00:12:48.440 "method": "nvmf_create_subsystem", 00:12:48.441 "params": { 00:12:48.441 "nqn": "nqn.2016-06.io.spdk:cnode130", 00:12:48.441 "model_number": "SPDK_Controller\u001f" 00:12:48.441 } 00:12:48.441 } 00:12:48.441 Got JSON-RPC error response 00:12:48.441 GoRPCClient: error on JSON-RPC call' 00:12:48.441 12:57:29 -- target/invalid.sh@51 -- # [[ 2024/12/13 12:57:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode130], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:48.441 request: 00:12:48.441 { 00:12:48.441 "method": "nvmf_create_subsystem", 00:12:48.441 "params": { 00:12:48.441 "nqn": "nqn.2016-06.io.spdk:cnode130", 00:12:48.441 "model_number": "SPDK_Controller\u001f" 00:12:48.441 } 00:12:48.441 } 00:12:48.441 Got JSON-RPC error response 00:12:48.441 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:48.441 12:57:29 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:48.441 12:57:29 -- target/invalid.sh@19 -- # local length=21 ll 00:12:48.441 12:57:29 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:48.441 12:57:29 -- target/invalid.sh@21 -- # local chars 00:12:48.441 12:57:29 -- target/invalid.sh@22 -- # local string 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 
-- target/invalid.sh@25 -- # printf %x 73 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=I 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 99 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=c 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 103 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=g 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 54 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=6 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 111 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=o 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 66 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=B 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 96 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+='`' 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 43 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=+ 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 89 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=Y 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 124 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+='|' 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 95 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=_ 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- 
target/invalid.sh@25 -- # printf %x 86 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=V 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 91 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+='[' 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 34 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+='"' 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 37 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=% 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 77 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=M 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 71 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=G 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 104 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=h 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 127 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 103 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+=g 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # printf %x 91 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:48.441 12:57:29 -- target/invalid.sh@25 -- # string+='[' 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.441 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.441 12:57:29 -- target/invalid.sh@28 -- # [[ I == \- ]] 00:12:48.441 12:57:29 -- target/invalid.sh@31 -- # echo 'Icg6oB`+Y|_V["%MGhg[' 00:12:48.441 12:57:29 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Icg6oB`+Y|_V["%MGhg[' nqn.2016-06.io.spdk:cnode14565 00:12:48.701 [2024-12-13 
12:57:29.366766] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14565: invalid serial number 'Icg6oB`+Y|_V["%MGhg[' 00:12:48.701 12:57:29 -- target/invalid.sh@54 -- # out='2024/12/13 12:57:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14565 serial_number:Icg6oB`+Y|_V["%MGhg[], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Icg6oB`+Y|_V["%MGhg[ 00:12:48.701 request: 00:12:48.701 { 00:12:48.701 "method": "nvmf_create_subsystem", 00:12:48.701 "params": { 00:12:48.701 "nqn": "nqn.2016-06.io.spdk:cnode14565", 00:12:48.701 "serial_number": "Icg6oB`+Y|_V[\"%MGh\u007fg[" 00:12:48.701 } 00:12:48.701 } 00:12:48.701 Got JSON-RPC error response 00:12:48.701 GoRPCClient: error on JSON-RPC call' 00:12:48.701 12:57:29 -- target/invalid.sh@55 -- # [[ 2024/12/13 12:57:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14565 serial_number:Icg6oB`+Y|_V["%MGhg[], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Icg6oB`+Y|_V["%MGhg[ 00:12:48.701 request: 00:12:48.701 { 00:12:48.701 "method": "nvmf_create_subsystem", 00:12:48.701 "params": { 00:12:48.701 "nqn": "nqn.2016-06.io.spdk:cnode14565", 00:12:48.701 "serial_number": "Icg6oB`+Y|_V[\"%MGh\u007fg[" 00:12:48.701 } 00:12:48.701 } 00:12:48.701 Got JSON-RPC error response 00:12:48.701 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:48.701 12:57:29 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:48.701 12:57:29 -- target/invalid.sh@19 -- # local length=41 ll 00:12:48.701 12:57:29 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:48.701 12:57:29 -- target/invalid.sh@21 -- # local chars 00:12:48.701 12:57:29 -- target/invalid.sh@22 -- # local string 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 116 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=t 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 79 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=O 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 49 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=1 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 117 00:12:48.701 12:57:29 -- 
target/invalid.sh@25 -- # echo -e '\x75' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=u 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 62 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+='>' 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 55 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=7 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 113 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=q 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 65 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=A 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 120 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=x 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 116 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=t 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 81 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=Q 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 108 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=l 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 69 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=E 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 111 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=o 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 95 00:12:48.701 12:57:29 -- 
target/invalid.sh@25 -- # echo -e '\x5f' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+=_ 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 125 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+='}' 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # printf %x 92 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:48.701 12:57:29 -- target/invalid.sh@25 -- # string+='\' 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.701 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 34 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+='"' 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 122 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=z 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 85 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=U 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 103 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=g 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 115 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=s 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 69 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=E 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 102 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=f 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 64 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=@ 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 67 00:12:48.961 12:57:29 -- 
target/invalid.sh@25 -- # echo -e '\x43' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=C 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 104 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=h 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 119 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=w 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 117 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=u 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 53 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=5 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 37 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=% 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 87 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=W 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 41 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=')' 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 81 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=Q 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 118 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=v 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 115 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=s 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 75 00:12:48.961 12:57:29 -- 
target/invalid.sh@25 -- # echo -e '\x4b' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=K 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 89 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=Y 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 38 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+='&' 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 48 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=0 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # printf %x 121 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:48.961 12:57:29 -- target/invalid.sh@25 -- # string+=y 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.961 12:57:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.961 12:57:29 -- target/invalid.sh@28 -- # [[ t == \- ]] 00:12:48.962 12:57:29 -- target/invalid.sh@31 -- # echo 'tO1u>7qAxtQlEo_}\"zUgsEf@Chwu5%W)QvsKY&0y' 00:12:48.962 12:57:29 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'tO1u>7qAxtQlEo_}\"zUgsEf@Chwu5%W)QvsKY&0y' nqn.2016-06.io.spdk:cnode7587 00:12:49.220 [2024-12-13 12:57:29.851187] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7587: invalid model number 'tO1u>7qAxtQlEo_}\"zUgsEf@Chwu5%W)QvsKY&0y' 00:12:49.220 12:57:29 -- target/invalid.sh@58 -- # out='2024/12/13 12:57:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:tO1u>7qAxtQlEo_}\"zUgsEf@Chwu5%W)QvsKY&0y nqn:nqn.2016-06.io.spdk:cnode7587], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN tO1u>7qAxtQlEo_}\"zUgsEf@Chwu5%W)QvsKY&0y 00:12:49.220 request: 00:12:49.220 { 00:12:49.220 "method": "nvmf_create_subsystem", 00:12:49.220 "params": { 00:12:49.220 "nqn": "nqn.2016-06.io.spdk:cnode7587", 00:12:49.220 "model_number": "tO1u>7qAxtQlEo_}\\\"zUgsEf@Chwu5%W)QvsKY&0y" 00:12:49.220 } 00:12:49.220 } 00:12:49.220 Got JSON-RPC error response 00:12:49.220 GoRPCClient: error on JSON-RPC call' 00:12:49.220 12:57:29 -- target/invalid.sh@59 -- # [[ 2024/12/13 12:57:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:tO1u>7qAxtQlEo_}\"zUgsEf@Chwu5%W)QvsKY&0y nqn:nqn.2016-06.io.spdk:cnode7587], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN tO1u>7qAxtQlEo_}\"zUgsEf@Chwu5%W)QvsKY&0y 00:12:49.220 request: 00:12:49.220 { 00:12:49.220 "method": "nvmf_create_subsystem", 00:12:49.220 "params": { 00:12:49.220 "nqn": "nqn.2016-06.io.spdk:cnode7587", 00:12:49.220 "model_number": "tO1u>7qAxtQlEo_}\\\"zUgsEf@Chwu5%W)QvsKY&0y" 00:12:49.220 } 00:12:49.220 } 00:12:49.220 Got JSON-RPC error response 00:12:49.220 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 
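For readability: the character-by-character trace above is the test's gen_random_s helper at work. It draws codes from the printable-ASCII pool (32-127), converts each one with printf %x plus echo -e, appends the character to the candidate string, and hands the result to nvmf_create_subsystem as a serial number (-s) or model number (-d), expecting the target to reject it with Code=-32602 "Invalid SN"/"Invalid MN". A condensed sketch of that loop, simplified from the traced script (the real helper additionally checks whether the first character is '-', as seen at invalid.sh line 28 in the trace, so the string is not mistaken for an option):

gen_random_s() {
    local length=$1 ll code string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 96 + 32 ))                    # printable ASCII 32..127, same pool as the chars array above
        string+=$(echo -e "\x$(printf '%x' "$code")")   # hex code -> character, as in the traced loop
    done
    echo "$string"
}

# A 21-character serial built this way is expected to fail exactly as in the trace:
serial=$(gen_random_s 21)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode14565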
00:12:49.220 12:57:29 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:49.479 [2024-12-13 12:57:30.147523] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.479 12:57:30 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:49.738 12:57:30 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:49.738 12:57:30 -- target/invalid.sh@67 -- # echo '' 00:12:49.738 12:57:30 -- target/invalid.sh@67 -- # head -n 1 00:12:49.738 12:57:30 -- target/invalid.sh@67 -- # IP= 00:12:49.738 12:57:30 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:49.996 [2024-12-13 12:57:30.720894] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:49.996 12:57:30 -- target/invalid.sh@69 -- # out='2024/12/13 12:57:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:49.996 request: 00:12:49.996 { 00:12:49.996 "method": "nvmf_subsystem_remove_listener", 00:12:49.996 "params": { 00:12:49.996 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:49.996 "listen_address": { 00:12:49.996 "trtype": "tcp", 00:12:49.996 "traddr": "", 00:12:49.996 "trsvcid": "4421" 00:12:49.996 } 00:12:49.996 } 00:12:49.996 } 00:12:49.996 Got JSON-RPC error response 00:12:49.996 GoRPCClient: error on JSON-RPC call' 00:12:49.996 12:57:30 -- target/invalid.sh@70 -- # [[ 2024/12/13 12:57:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:49.996 request: 00:12:49.996 { 00:12:49.996 "method": "nvmf_subsystem_remove_listener", 00:12:49.996 "params": { 00:12:49.996 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:49.996 "listen_address": { 00:12:49.996 "trtype": "tcp", 00:12:49.996 "traddr": "", 00:12:49.996 "trsvcid": "4421" 00:12:49.996 } 00:12:49.996 } 00:12:49.996 } 00:12:49.996 Got JSON-RPC error response 00:12:49.996 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:49.996 12:57:30 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23586 -i 0 00:12:50.254 [2024-12-13 12:57:31.009110] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23586: invalid cntlid range [0-65519] 00:12:50.512 12:57:31 -- target/invalid.sh@73 -- # out='2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23586], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:50.512 request: 00:12:50.512 { 00:12:50.512 "method": "nvmf_create_subsystem", 00:12:50.512 "params": { 00:12:50.512 "nqn": "nqn.2016-06.io.spdk:cnode23586", 00:12:50.512 "min_cntlid": 0 00:12:50.512 } 00:12:50.512 } 00:12:50.512 Got JSON-RPC error response 00:12:50.512 GoRPCClient: error on JSON-RPC call' 00:12:50.512 12:57:31 -- target/invalid.sh@74 -- # [[ 2024/12/13 12:57:31 error on JSON-RPC call, 
method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23586], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:50.512 request: 00:12:50.512 { 00:12:50.512 "method": "nvmf_create_subsystem", 00:12:50.512 "params": { 00:12:50.512 "nqn": "nqn.2016-06.io.spdk:cnode23586", 00:12:50.512 "min_cntlid": 0 00:12:50.512 } 00:12:50.512 } 00:12:50.512 Got JSON-RPC error response 00:12:50.512 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:50.512 12:57:31 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6875 -i 65520 00:12:50.512 [2024-12-13 12:57:31.241280] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6875: invalid cntlid range [65520-65519] 00:12:50.512 12:57:31 -- target/invalid.sh@75 -- # out='2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6875], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:50.512 request: 00:12:50.512 { 00:12:50.512 "method": "nvmf_create_subsystem", 00:12:50.512 "params": { 00:12:50.512 "nqn": "nqn.2016-06.io.spdk:cnode6875", 00:12:50.512 "min_cntlid": 65520 00:12:50.512 } 00:12:50.512 } 00:12:50.512 Got JSON-RPC error response 00:12:50.512 GoRPCClient: error on JSON-RPC call' 00:12:50.512 12:57:31 -- target/invalid.sh@76 -- # [[ 2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6875], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:50.512 request: 00:12:50.512 { 00:12:50.513 "method": "nvmf_create_subsystem", 00:12:50.513 "params": { 00:12:50.513 "nqn": "nqn.2016-06.io.spdk:cnode6875", 00:12:50.513 "min_cntlid": 65520 00:12:50.513 } 00:12:50.513 } 00:12:50.513 Got JSON-RPC error response 00:12:50.513 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:50.513 12:57:31 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28625 -I 0 00:12:50.771 [2024-12-13 12:57:31.513542] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28625: invalid cntlid range [1-0] 00:12:50.771 12:57:31 -- target/invalid.sh@77 -- # out='2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode28625], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:50.771 request: 00:12:50.771 { 00:12:50.771 "method": "nvmf_create_subsystem", 00:12:50.771 "params": { 00:12:50.771 "nqn": "nqn.2016-06.io.spdk:cnode28625", 00:12:50.771 "max_cntlid": 0 00:12:50.771 } 00:12:50.771 } 00:12:50.771 Got JSON-RPC error response 00:12:50.771 GoRPCClient: error on JSON-RPC call' 00:12:50.771 12:57:31 -- target/invalid.sh@78 -- # [[ 2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode28625], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:50.771 request: 00:12:50.771 { 00:12:50.771 "method": "nvmf_create_subsystem", 00:12:50.771 "params": { 00:12:50.771 "nqn": 
"nqn.2016-06.io.spdk:cnode28625", 00:12:50.771 "max_cntlid": 0 00:12:50.771 } 00:12:50.771 } 00:12:50.771 Got JSON-RPC error response 00:12:50.771 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:50.771 12:57:31 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11122 -I 65520 00:12:51.030 [2024-12-13 12:57:31.741726] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11122: invalid cntlid range [1-65520] 00:12:51.030 12:57:31 -- target/invalid.sh@79 -- # out='2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11122], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:51.030 request: 00:12:51.030 { 00:12:51.030 "method": "nvmf_create_subsystem", 00:12:51.030 "params": { 00:12:51.030 "nqn": "nqn.2016-06.io.spdk:cnode11122", 00:12:51.030 "max_cntlid": 65520 00:12:51.030 } 00:12:51.030 } 00:12:51.030 Got JSON-RPC error response 00:12:51.030 GoRPCClient: error on JSON-RPC call' 00:12:51.030 12:57:31 -- target/invalid.sh@80 -- # [[ 2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11122], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:51.030 request: 00:12:51.030 { 00:12:51.030 "method": "nvmf_create_subsystem", 00:12:51.030 "params": { 00:12:51.030 "nqn": "nqn.2016-06.io.spdk:cnode11122", 00:12:51.030 "max_cntlid": 65520 00:12:51.030 } 00:12:51.030 } 00:12:51.030 Got JSON-RPC error response 00:12:51.030 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.030 12:57:31 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28249 -i 6 -I 5 00:12:51.289 [2024-12-13 12:57:31.969982] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28249: invalid cntlid range [6-5] 00:12:51.289 12:57:31 -- target/invalid.sh@83 -- # out='2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28249], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:51.289 request: 00:12:51.289 { 00:12:51.289 "method": "nvmf_create_subsystem", 00:12:51.289 "params": { 00:12:51.289 "nqn": "nqn.2016-06.io.spdk:cnode28249", 00:12:51.289 "min_cntlid": 6, 00:12:51.289 "max_cntlid": 5 00:12:51.289 } 00:12:51.289 } 00:12:51.289 Got JSON-RPC error response 00:12:51.289 GoRPCClient: error on JSON-RPC call' 00:12:51.289 12:57:31 -- target/invalid.sh@84 -- # [[ 2024/12/13 12:57:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28249], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:51.289 request: 00:12:51.289 { 00:12:51.289 "method": "nvmf_create_subsystem", 00:12:51.289 "params": { 00:12:51.289 "nqn": "nqn.2016-06.io.spdk:cnode28249", 00:12:51.289 "min_cntlid": 6, 00:12:51.289 "max_cntlid": 5 00:12:51.289 } 00:12:51.289 } 00:12:51.289 Got JSON-RPC error response 00:12:51.289 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.289 12:57:31 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:51.548 12:57:32 -- target/invalid.sh@87 -- # out='request: 00:12:51.548 { 00:12:51.548 "name": "foobar", 00:12:51.548 "method": "nvmf_delete_target", 00:12:51.548 "req_id": 1 00:12:51.548 } 00:12:51.548 Got JSON-RPC error response 00:12:51.548 response: 00:12:51.548 { 00:12:51.548 "code": -32602, 00:12:51.548 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:51.548 }' 00:12:51.548 12:57:32 -- target/invalid.sh@88 -- # [[ request: 00:12:51.548 { 00:12:51.548 "name": "foobar", 00:12:51.548 "method": "nvmf_delete_target", 00:12:51.548 "req_id": 1 00:12:51.548 } 00:12:51.548 Got JSON-RPC error response 00:12:51.548 response: 00:12:51.548 { 00:12:51.548 "code": -32602, 00:12:51.548 "message": "The specified target doesn't exist, cannot delete it." 00:12:51.548 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:51.548 12:57:32 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:51.548 12:57:32 -- target/invalid.sh@91 -- # nvmftestfini 00:12:51.548 12:57:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.548 12:57:32 -- nvmf/common.sh@116 -- # sync 00:12:51.548 12:57:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.548 12:57:32 -- nvmf/common.sh@119 -- # set +e 00:12:51.548 12:57:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.548 12:57:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.548 rmmod nvme_tcp 00:12:51.548 rmmod nvme_fabrics 00:12:51.548 rmmod nvme_keyring 00:12:51.548 12:57:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.548 12:57:32 -- nvmf/common.sh@123 -- # set -e 00:12:51.548 12:57:32 -- nvmf/common.sh@124 -- # return 0 00:12:51.549 12:57:32 -- nvmf/common.sh@477 -- # '[' -n 78168 ']' 00:12:51.549 12:57:32 -- nvmf/common.sh@478 -- # killprocess 78168 00:12:51.549 12:57:32 -- common/autotest_common.sh@936 -- # '[' -z 78168 ']' 00:12:51.549 12:57:32 -- common/autotest_common.sh@940 -- # kill -0 78168 00:12:51.549 12:57:32 -- common/autotest_common.sh@941 -- # uname 00:12:51.549 12:57:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:51.549 12:57:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78168 00:12:51.549 12:57:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:51.549 killing process with pid 78168 00:12:51.549 12:57:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:51.549 12:57:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78168' 00:12:51.549 12:57:32 -- common/autotest_common.sh@955 -- # kill 78168 00:12:51.549 12:57:32 -- common/autotest_common.sh@960 -- # wait 78168 00:12:51.807 12:57:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:51.807 12:57:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:51.807 12:57:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:51.807 12:57:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.807 12:57:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:51.807 12:57:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.807 12:57:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.807 12:57:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.807 12:57:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:51.808 
************************************ 00:12:51.808 END TEST nvmf_invalid 00:12:51.808 ************************************ 00:12:51.808 00:12:51.808 real 0m5.879s 00:12:51.808 user 0m23.389s 00:12:51.808 sys 0m1.260s 00:12:51.808 12:57:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:51.808 12:57:32 -- common/autotest_common.sh@10 -- # set +x 00:12:51.808 12:57:32 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:51.808 12:57:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:51.808 12:57:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.808 12:57:32 -- common/autotest_common.sh@10 -- # set +x 00:12:51.808 ************************************ 00:12:51.808 START TEST nvmf_abort 00:12:51.808 ************************************ 00:12:51.808 12:57:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:51.808 * Looking for test storage... 00:12:51.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:51.808 12:57:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:51.808 12:57:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:51.808 12:57:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:52.067 12:57:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:52.067 12:57:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:52.067 12:57:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:52.067 12:57:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:52.067 12:57:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:52.067 12:57:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:52.067 12:57:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.067 12:57:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:52.067 12:57:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:52.067 12:57:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:52.067 12:57:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:52.067 12:57:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:52.067 12:57:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:52.067 12:57:32 -- scripts/common.sh@344 -- # : 1 00:12:52.067 12:57:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:52.067 12:57:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.067 12:57:32 -- scripts/common.sh@364 -- # decimal 1 00:12:52.067 12:57:32 -- scripts/common.sh@352 -- # local d=1 00:12:52.067 12:57:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.067 12:57:32 -- scripts/common.sh@354 -- # echo 1 00:12:52.067 12:57:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:52.067 12:57:32 -- scripts/common.sh@365 -- # decimal 2 00:12:52.067 12:57:32 -- scripts/common.sh@352 -- # local d=2 00:12:52.067 12:57:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.067 12:57:32 -- scripts/common.sh@354 -- # echo 2 00:12:52.067 12:57:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:52.067 12:57:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:52.067 12:57:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:52.067 12:57:32 -- scripts/common.sh@367 -- # return 0 00:12:52.067 12:57:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.067 12:57:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.067 --rc genhtml_branch_coverage=1 00:12:52.067 --rc genhtml_function_coverage=1 00:12:52.067 --rc genhtml_legend=1 00:12:52.067 --rc geninfo_all_blocks=1 00:12:52.067 --rc geninfo_unexecuted_blocks=1 00:12:52.067 00:12:52.067 ' 00:12:52.067 12:57:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.067 --rc genhtml_branch_coverage=1 00:12:52.067 --rc genhtml_function_coverage=1 00:12:52.067 --rc genhtml_legend=1 00:12:52.067 --rc geninfo_all_blocks=1 00:12:52.067 --rc geninfo_unexecuted_blocks=1 00:12:52.067 00:12:52.067 ' 00:12:52.067 12:57:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.067 --rc genhtml_branch_coverage=1 00:12:52.067 --rc genhtml_function_coverage=1 00:12:52.067 --rc genhtml_legend=1 00:12:52.067 --rc geninfo_all_blocks=1 00:12:52.067 --rc geninfo_unexecuted_blocks=1 00:12:52.067 00:12:52.067 ' 00:12:52.067 12:57:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.067 --rc genhtml_branch_coverage=1 00:12:52.067 --rc genhtml_function_coverage=1 00:12:52.067 --rc genhtml_legend=1 00:12:52.067 --rc geninfo_all_blocks=1 00:12:52.067 --rc geninfo_unexecuted_blocks=1 00:12:52.067 00:12:52.067 ' 00:12:52.067 12:57:32 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.067 12:57:32 -- nvmf/common.sh@7 -- # uname -s 00:12:52.067 12:57:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.067 12:57:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.067 12:57:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.067 12:57:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.067 12:57:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.067 12:57:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.067 12:57:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.067 12:57:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.067 12:57:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.067 12:57:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.067 12:57:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:52.067 
12:57:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:52.067 12:57:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.067 12:57:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.067 12:57:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.067 12:57:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.067 12:57:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.067 12:57:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.067 12:57:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.067 12:57:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.067 12:57:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.067 12:57:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.067 12:57:32 -- paths/export.sh@5 -- # export PATH 00:12:52.067 12:57:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.067 12:57:32 -- nvmf/common.sh@46 -- # : 0 00:12:52.067 12:57:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.067 12:57:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.067 12:57:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.067 12:57:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.067 12:57:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.067 12:57:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
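The nvmftestinit step that follows (nvmf_veth_init from nvmf/common.sh) builds the virtual network the TCP target will listen on: an nvmf_tgt_ns_spdk namespace holding the target-side veth ends, bridged back to the initiator side, with 10.0.0.1 as the initiator address and 10.0.0.2/10.0.0.3 as target addresses. Reduced to its essential commands (same interface names and addresses as in the trace below; the per-link 'ip link set ... up' steps are omitted here), the topology is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks further down (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm this wiring before the target application is started.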
00:12:52.067 12:57:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.067 12:57:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.067 12:57:32 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.068 12:57:32 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:52.068 12:57:32 -- target/abort.sh@14 -- # nvmftestinit 00:12:52.068 12:57:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.068 12:57:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.068 12:57:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.068 12:57:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.068 12:57:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.068 12:57:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.068 12:57:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.068 12:57:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.068 12:57:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:52.068 12:57:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:52.068 12:57:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:52.068 12:57:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:52.068 12:57:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:52.068 12:57:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:52.068 12:57:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.068 12:57:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.068 12:57:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:52.068 12:57:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:52.068 12:57:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:52.068 12:57:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:52.068 12:57:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:52.068 12:57:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.068 12:57:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:52.068 12:57:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:52.068 12:57:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:52.068 12:57:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:52.068 12:57:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:52.068 12:57:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:52.068 Cannot find device "nvmf_tgt_br" 00:12:52.068 12:57:32 -- nvmf/common.sh@154 -- # true 00:12:52.068 12:57:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.068 Cannot find device "nvmf_tgt_br2" 00:12:52.068 12:57:32 -- nvmf/common.sh@155 -- # true 00:12:52.068 12:57:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:52.068 12:57:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:52.068 Cannot find device "nvmf_tgt_br" 00:12:52.068 12:57:32 -- nvmf/common.sh@157 -- # true 00:12:52.068 12:57:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:52.068 Cannot find device "nvmf_tgt_br2" 00:12:52.068 12:57:32 -- nvmf/common.sh@158 -- # true 00:12:52.068 12:57:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:52.068 12:57:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:52.068 12:57:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:52.068 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:52.068 12:57:32 -- nvmf/common.sh@161 -- # true 00:12:52.068 12:57:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.068 12:57:32 -- nvmf/common.sh@162 -- # true 00:12:52.068 12:57:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.068 12:57:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.068 12:57:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.068 12:57:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.068 12:57:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.068 12:57:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.327 12:57:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.327 12:57:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.327 12:57:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.327 12:57:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:52.327 12:57:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:52.327 12:57:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:52.327 12:57:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:52.327 12:57:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.327 12:57:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.327 12:57:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:52.327 12:57:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:52.327 12:57:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:52.327 12:57:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.327 12:57:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.327 12:57:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.327 12:57:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.327 12:57:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.327 12:57:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:52.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:52.327 00:12:52.327 --- 10.0.0.2 ping statistics --- 00:12:52.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.327 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:52.327 12:57:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:52.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:12:52.327 00:12:52.327 --- 10.0.0.3 ping statistics --- 00:12:52.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.327 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:52.327 12:57:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:52.327 00:12:52.327 --- 10.0.0.1 ping statistics --- 00:12:52.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.327 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:52.327 12:57:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.327 12:57:32 -- nvmf/common.sh@421 -- # return 0 00:12:52.327 12:57:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.327 12:57:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.327 12:57:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.327 12:57:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.327 12:57:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.327 12:57:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.327 12:57:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.327 12:57:33 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:52.327 12:57:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.327 12:57:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.327 12:57:33 -- common/autotest_common.sh@10 -- # set +x 00:12:52.327 12:57:33 -- nvmf/common.sh@469 -- # nvmfpid=78685 00:12:52.327 12:57:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:52.327 12:57:33 -- nvmf/common.sh@470 -- # waitforlisten 78685 00:12:52.328 12:57:33 -- common/autotest_common.sh@829 -- # '[' -z 78685 ']' 00:12:52.328 12:57:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.328 12:57:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.328 12:57:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.328 12:57:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.328 12:57:33 -- common/autotest_common.sh@10 -- # set +x 00:12:52.328 [2024-12-13 12:57:33.057086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:52.328 [2024-12-13 12:57:33.057168] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.587 [2024-12-13 12:57:33.184899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.587 [2024-12-13 12:57:33.252527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.587 [2024-12-13 12:57:33.252673] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.587 [2024-12-13 12:57:33.252684] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.587 [2024-12-13 12:57:33.252693] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
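Once the nvmf_tgt app is up with core mask 0xE inside the namespace (the reactor notices below), abort.sh configures it entirely over JSON-RPC and then runs the abort example against it. Condensed from the rpc_cmd calls traced below (rpc_cmd is the test framework's wrapper around scripts/rpc.py), the sequence is:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev layered on Malloc0 adds a large artificial latency so that I/Os stay outstanding long enough for the abort example to cancel them; the summary further down (success 37301, unsuccess 61, failed 0) reports how those submitted aborts completed.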
00:12:52.587 [2024-12-13 12:57:33.253186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.587 [2024-12-13 12:57:33.253573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.587 [2024-12-13 12:57:33.253602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.523 12:57:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.523 12:57:33 -- common/autotest_common.sh@862 -- # return 0 00:12:53.523 12:57:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.523 12:57:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.523 12:57:33 -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 12:57:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.523 12:57:33 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:53.523 12:57:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.523 12:57:33 -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 [2024-12-13 12:57:33.994629] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.523 12:57:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.523 12:57:33 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:53.523 12:57:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.523 12:57:33 -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 Malloc0 00:12:53.523 12:57:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.523 12:57:34 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:53.523 12:57:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.523 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 Delay0 00:12:53.523 12:57:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.523 12:57:34 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:53.523 12:57:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.524 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:12:53.524 12:57:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.524 12:57:34 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:53.524 12:57:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.524 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:12:53.524 12:57:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.524 12:57:34 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:53.524 12:57:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.524 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:12:53.524 [2024-12-13 12:57:34.066221] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.524 12:57:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.524 12:57:34 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.524 12:57:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.524 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:12:53.524 12:57:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.524 12:57:34 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:53.524 [2024-12-13 12:57:34.245857] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:56.060 Initializing NVMe Controllers 00:12:56.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:56.060 controller IO queue size 128 less than required 00:12:56.060 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:56.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:56.060 Initialization complete. Launching workers. 00:12:56.060 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37301 00:12:56.060 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37362, failed to submit 62 00:12:56.060 success 37301, unsuccess 61, failed 0 00:12:56.060 12:57:36 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:56.060 12:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.060 12:57:36 -- common/autotest_common.sh@10 -- # set +x 00:12:56.060 12:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.060 12:57:36 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:56.060 12:57:36 -- target/abort.sh@38 -- # nvmftestfini 00:12:56.060 12:57:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.060 12:57:36 -- nvmf/common.sh@116 -- # sync 00:12:56.060 12:57:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.060 12:57:36 -- nvmf/common.sh@119 -- # set +e 00:12:56.060 12:57:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.060 12:57:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:56.060 rmmod nvme_tcp 00:12:56.060 rmmod nvme_fabrics 00:12:56.060 rmmod nvme_keyring 00:12:56.060 12:57:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:56.060 12:57:36 -- nvmf/common.sh@123 -- # set -e 00:12:56.060 12:57:36 -- nvmf/common.sh@124 -- # return 0 00:12:56.060 12:57:36 -- nvmf/common.sh@477 -- # '[' -n 78685 ']' 00:12:56.060 12:57:36 -- nvmf/common.sh@478 -- # killprocess 78685 00:12:56.060 12:57:36 -- common/autotest_common.sh@936 -- # '[' -z 78685 ']' 00:12:56.061 12:57:36 -- common/autotest_common.sh@940 -- # kill -0 78685 00:12:56.061 12:57:36 -- common/autotest_common.sh@941 -- # uname 00:12:56.061 12:57:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:56.061 12:57:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78685 00:12:56.061 killing process with pid 78685 00:12:56.061 12:57:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:56.061 12:57:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:56.061 12:57:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78685' 00:12:56.061 12:57:36 -- common/autotest_common.sh@955 -- # kill 78685 00:12:56.061 12:57:36 -- common/autotest_common.sh@960 -- # wait 78685 00:12:56.061 12:57:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:56.061 12:57:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:56.061 12:57:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:56.061 12:57:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.061 12:57:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:56.061 12:57:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.061 
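Condensed, the abort test that just finished builds one TCP subsystem backed by a delay bdev and then drives it with the abort example at queue depth 128; the delay bdev (latencies given in microseconds, so roughly a second of added latency) keeps I/O outstanding long enough that nearly all of the ~37 k submitted aborts catch their target command, which is what the "success 37301, unsuccess 61" summary reflects. A sketch of the sequence, with rpc.py standing in for the rpc_cmd wrapper used by the script; every command below appears in the trace above.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192 -a 256
"$rpc_py" bdev_malloc_create 64 4096 -b Malloc0
"$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128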
12:57:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.061 12:57:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.061 12:57:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:56.061 00:12:56.061 real 0m4.184s 00:12:56.061 user 0m12.069s 00:12:56.061 sys 0m0.950s 00:12:56.061 12:57:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:56.061 12:57:36 -- common/autotest_common.sh@10 -- # set +x 00:12:56.061 ************************************ 00:12:56.061 END TEST nvmf_abort 00:12:56.061 ************************************ 00:12:56.061 12:57:36 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:56.061 12:57:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:56.061 12:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:56.061 12:57:36 -- common/autotest_common.sh@10 -- # set +x 00:12:56.061 ************************************ 00:12:56.061 START TEST nvmf_ns_hotplug_stress 00:12:56.061 ************************************ 00:12:56.061 12:57:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:56.061 * Looking for test storage... 00:12:56.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:56.061 12:57:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:56.061 12:57:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:56.061 12:57:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:56.320 12:57:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:56.320 12:57:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:56.320 12:57:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:56.320 12:57:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:56.320 12:57:36 -- scripts/common.sh@335 -- # IFS=.-: 00:12:56.320 12:57:36 -- scripts/common.sh@335 -- # read -ra ver1 00:12:56.320 12:57:36 -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.320 12:57:36 -- scripts/common.sh@336 -- # read -ra ver2 00:12:56.320 12:57:36 -- scripts/common.sh@337 -- # local 'op=<' 00:12:56.320 12:57:36 -- scripts/common.sh@339 -- # ver1_l=2 00:12:56.320 12:57:36 -- scripts/common.sh@340 -- # ver2_l=1 00:12:56.320 12:57:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:56.320 12:57:36 -- scripts/common.sh@343 -- # case "$op" in 00:12:56.320 12:57:36 -- scripts/common.sh@344 -- # : 1 00:12:56.320 12:57:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:56.320 12:57:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.320 12:57:36 -- scripts/common.sh@364 -- # decimal 1 00:12:56.320 12:57:36 -- scripts/common.sh@352 -- # local d=1 00:12:56.320 12:57:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.320 12:57:36 -- scripts/common.sh@354 -- # echo 1 00:12:56.320 12:57:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:56.320 12:57:36 -- scripts/common.sh@365 -- # decimal 2 00:12:56.320 12:57:36 -- scripts/common.sh@352 -- # local d=2 00:12:56.320 12:57:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.320 12:57:36 -- scripts/common.sh@354 -- # echo 2 00:12:56.320 12:57:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:56.320 12:57:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:56.320 12:57:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:56.320 12:57:36 -- scripts/common.sh@367 -- # return 0 00:12:56.320 12:57:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.320 12:57:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:56.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.320 --rc genhtml_branch_coverage=1 00:12:56.320 --rc genhtml_function_coverage=1 00:12:56.320 --rc genhtml_legend=1 00:12:56.320 --rc geninfo_all_blocks=1 00:12:56.320 --rc geninfo_unexecuted_blocks=1 00:12:56.320 00:12:56.320 ' 00:12:56.320 12:57:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:56.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.320 --rc genhtml_branch_coverage=1 00:12:56.320 --rc genhtml_function_coverage=1 00:12:56.320 --rc genhtml_legend=1 00:12:56.320 --rc geninfo_all_blocks=1 00:12:56.320 --rc geninfo_unexecuted_blocks=1 00:12:56.320 00:12:56.320 ' 00:12:56.320 12:57:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:56.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.320 --rc genhtml_branch_coverage=1 00:12:56.320 --rc genhtml_function_coverage=1 00:12:56.320 --rc genhtml_legend=1 00:12:56.320 --rc geninfo_all_blocks=1 00:12:56.320 --rc geninfo_unexecuted_blocks=1 00:12:56.320 00:12:56.320 ' 00:12:56.320 12:57:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:56.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.320 --rc genhtml_branch_coverage=1 00:12:56.320 --rc genhtml_function_coverage=1 00:12:56.320 --rc genhtml_legend=1 00:12:56.320 --rc geninfo_all_blocks=1 00:12:56.320 --rc geninfo_unexecuted_blocks=1 00:12:56.320 00:12:56.320 ' 00:12:56.320 12:57:36 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.320 12:57:36 -- nvmf/common.sh@7 -- # uname -s 00:12:56.320 12:57:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.320 12:57:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.320 12:57:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.320 12:57:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.320 12:57:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.320 12:57:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.320 12:57:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.320 12:57:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.320 12:57:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.320 12:57:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.320 12:57:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
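At this point nvmf/common.sh generates a fresh host identity: nvme gen-hostnqn returns a UUID-based NQN, and the bare UUID is reused as NVME_HOSTID just below, giving kernel-initiator tests a consistent --hostnqn/--hostid pair for nvme connect. This particular run drives I/O with spdk_nvme_perf instead, so the pair is only recorded; if it were used, the connect call would look roughly like the sketch below (target address and subsystem name are taken from later in this log, the flags are standard nvme-cli options, and the whole call is illustrative rather than something this run executes).

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"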
00:12:56.320 12:57:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:12:56.320 12:57:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.320 12:57:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.320 12:57:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:56.320 12:57:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.320 12:57:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.320 12:57:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.320 12:57:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.320 12:57:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.320 12:57:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.320 12:57:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.320 12:57:36 -- paths/export.sh@5 -- # export PATH 00:12:56.320 12:57:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.320 12:57:36 -- nvmf/common.sh@46 -- # : 0 00:12:56.320 12:57:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:56.320 12:57:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:56.320 12:57:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:56.320 12:57:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.320 12:57:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.320 12:57:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:56.320 12:57:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:56.320 12:57:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:56.320 12:57:36 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.320 12:57:36 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:56.320 12:57:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:56.320 12:57:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.320 12:57:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:56.320 12:57:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:56.320 12:57:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:56.320 12:57:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.320 12:57:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.320 12:57:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.320 12:57:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:56.320 12:57:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:56.321 12:57:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:56.321 12:57:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:56.321 12:57:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:56.321 12:57:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:56.321 12:57:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.321 12:57:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.321 12:57:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:56.321 12:57:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:56.321 12:57:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:56.321 12:57:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:56.321 12:57:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:56.321 12:57:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.321 12:57:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:56.321 12:57:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:56.321 12:57:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:56.321 12:57:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:56.321 12:57:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:56.321 12:57:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:56.321 Cannot find device "nvmf_tgt_br" 00:12:56.321 12:57:36 -- nvmf/common.sh@154 -- # true 00:12:56.321 12:57:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.321 Cannot find device "nvmf_tgt_br2" 00:12:56.321 12:57:36 -- nvmf/common.sh@155 -- # true 00:12:56.321 12:57:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:56.321 12:57:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:56.321 Cannot find device "nvmf_tgt_br" 00:12:56.321 12:57:36 -- nvmf/common.sh@157 -- # true 00:12:56.321 12:57:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:56.321 Cannot find device "nvmf_tgt_br2" 00:12:56.321 12:57:36 -- nvmf/common.sh@158 -- # true 00:12:56.321 12:57:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:56.321 12:57:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:56.321 12:57:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.321 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:56.321 12:57:37 -- nvmf/common.sh@161 -- # true 00:12:56.321 12:57:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.321 12:57:37 -- nvmf/common.sh@162 -- # true 00:12:56.321 12:57:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.321 12:57:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.321 12:57:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.321 12:57:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.321 12:57:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.321 12:57:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.321 12:57:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.321 12:57:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:56.321 12:57:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:56.580 12:57:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:56.580 12:57:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:56.580 12:57:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:56.580 12:57:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:56.580 12:57:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.580 12:57:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.580 12:57:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.580 12:57:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:56.580 12:57:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:56.580 12:57:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.580 12:57:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.580 12:57:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.580 12:57:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.580 12:57:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.580 12:57:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:56.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:12:56.580 00:12:56.580 --- 10.0.0.2 ping statistics --- 00:12:56.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.580 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:56.580 12:57:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:56.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:12:56.580 00:12:56.580 --- 10.0.0.3 ping statistics --- 00:12:56.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.580 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:56.580 12:57:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:56.580 00:12:56.580 --- 10.0.0.1 ping statistics --- 00:12:56.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.580 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:56.580 12:57:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.580 12:57:37 -- nvmf/common.sh@421 -- # return 0 00:12:56.580 12:57:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:56.580 12:57:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.580 12:57:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:56.580 12:57:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:56.580 12:57:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.580 12:57:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:56.580 12:57:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:56.580 12:57:37 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:56.580 12:57:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:56.580 12:57:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:56.580 12:57:37 -- common/autotest_common.sh@10 -- # set +x 00:12:56.580 12:57:37 -- nvmf/common.sh@469 -- # nvmfpid=78952 00:12:56.580 12:57:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:56.580 12:57:37 -- nvmf/common.sh@470 -- # waitforlisten 78952 00:12:56.580 12:57:37 -- common/autotest_common.sh@829 -- # '[' -z 78952 ']' 00:12:56.580 12:57:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.580 12:57:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.580 12:57:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.580 12:57:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.580 12:57:37 -- common/autotest_common.sh@10 -- # set +x 00:12:56.580 [2024-12-13 12:57:37.294947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:56.580 [2024-12-13 12:57:37.295037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.857 [2024-12-13 12:57:37.428250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.857 [2024-12-13 12:57:37.496755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:56.857 [2024-12-13 12:57:37.496935] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.857 [2024-12-13 12:57:37.496948] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.857 [2024-12-13 12:57:37.496957] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
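The three pings above verify the virtual topology that nvmftestinit/nvmf_veth_init rebuilt a moment earlier; the "Cannot find device" and "Cannot open network namespace" messages before that are just the routine cleanup discovering that the previous test's namespace, and the veth ends that lived inside it, were already gone. Summarised from the ip/iptables commands in the trace:

#   root namespace:          nvmf_init_if   10.0.0.1/24   (veth peer nvmf_init_br, enslaved to nvmf_br)
#   netns nvmf_tgt_ns_spdk:  nvmf_tgt_if    10.0.0.2/24   (veth peer nvmf_tgt_br,  enslaved to nvmf_br)
#                            nvmf_tgt_if2   10.0.0.3/24   (veth peer nvmf_tgt_br2, enslaved to nvmf_br)
#   nvmf_br bridges the three peer ends; iptables accepts TCP/4420 arriving on nvmf_init_if
#   and allows forwarding for traffic that stays on nvmf_br.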
00:12:56.857 [2024-12-13 12:57:37.497102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.857 [2024-12-13 12:57:37.497785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.857 [2024-12-13 12:57:37.497811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.800 12:57:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.800 12:57:38 -- common/autotest_common.sh@862 -- # return 0 00:12:57.800 12:57:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:57.800 12:57:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.800 12:57:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.800 12:57:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.800 12:57:38 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:57.800 12:57:38 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:58.058 [2024-12-13 12:57:38.577239] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.058 12:57:38 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.317 12:57:38 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.317 [2024-12-13 12:57:39.087451] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.576 12:57:39 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:58.576 12:57:39 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:59.143 Malloc0 00:12:59.143 12:57:39 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:59.143 Delay0 00:12:59.143 12:57:39 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.402 12:57:40 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:59.660 NULL1 00:12:59.660 12:57:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:59.923 12:57:40 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79087 00:12:59.923 12:57:40 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:59.923 12:57:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:12:59.923 12:57:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.300 Read completed with error (sct=0, sc=11) 00:13:01.300 12:57:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.300 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:01.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.558 12:57:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:01.558 12:57:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:01.817 true 00:13:01.817 12:57:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:01.817 12:57:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.383 12:57:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.641 12:57:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:02.641 12:57:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:02.900 true 00:13:02.900 12:57:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:02.900 12:57:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.467 12:57:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.467 12:57:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:03.467 12:57:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:03.726 true 00:13:03.726 12:57:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:03.726 12:57:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.985 12:57:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.244 12:57:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:04.244 12:57:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:04.508 true 00:13:04.508 12:57:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:04.508 12:57:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.443 12:57:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.702 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:13:05.702 12:57:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:05.702 12:57:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:05.960 true 00:13:05.960 12:57:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:05.960 12:57:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.896 12:57:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.154 12:57:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:07.154 12:57:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:07.154 true 00:13:07.154 12:57:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:07.154 12:57:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.413 12:57:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.671 12:57:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:07.671 12:57:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:07.930 true 00:13:07.930 12:57:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:07.930 12:57:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.865 12:57:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.124 12:57:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:09.124 12:57:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:09.383 true 00:13:09.383 12:57:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:09.383 12:57:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.641 12:57:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.641 12:57:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:09.641 12:57:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:09.899 true 00:13:09.899 12:57:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:09.899 12:57:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.834 12:57:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.092 12:57:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:11.092 12:57:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:11.351 true 00:13:11.351 12:57:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:11.351 12:57:51 -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.609 12:57:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.868 12:57:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:11.868 12:57:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:11.868 true 00:13:11.868 12:57:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:11.868 12:57:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.803 12:57:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.062 12:57:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:13.062 12:57:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:13.320 true 00:13:13.320 12:57:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:13.320 12:57:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.579 12:57:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.837 12:57:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:13.837 12:57:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:13.837 true 00:13:14.096 12:57:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:14.096 12:57:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.031 12:57:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.031 12:57:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:15.031 12:57:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:15.290 true 00:13:15.290 12:57:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:15.290 12:57:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.549 12:57:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.807 12:57:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:15.807 12:57:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:16.066 true 00:13:16.066 12:57:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:16.066 12:57:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.072 12:57:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.072 12:57:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:17.072 12:57:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1016 00:13:17.330 true 00:13:17.330 12:57:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:17.330 12:57:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.589 12:57:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.847 12:57:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:17.847 12:57:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:18.105 true 00:13:18.105 12:57:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:18.105 12:57:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.364 12:57:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.623 12:57:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:18.623 12:57:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:18.623 true 00:13:18.623 12:57:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:18.623 12:57:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.999 12:58:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.999 12:58:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:19.999 12:58:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:20.258 true 00:13:20.258 12:58:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:20.258 12:58:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.193 12:58:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.452 12:58:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:21.452 12:58:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:21.710 true 00:13:21.710 12:58:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:21.710 12:58:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.969 12:58:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.969 12:58:02 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:21.969 12:58:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:22.227 true 00:13:22.227 12:58:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:22.227 12:58:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.162 12:58:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.421 12:58:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:23.421 12:58:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:23.680 true 00:13:23.680 12:58:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:23.680 12:58:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.939 12:58:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.939 12:58:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:23.939 12:58:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:24.198 true 00:13:24.457 12:58:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:24.457 12:58:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.024 12:58:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.282 12:58:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:25.282 12:58:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:25.541 true 00:13:25.541 12:58:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:25.541 12:58:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.799 12:58:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.057 12:58:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:26.057 12:58:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:26.316 true 00:13:26.316 12:58:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:26.316 12:58:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.252 12:58:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.511 12:58:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:27.511 12:58:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:27.511 true 00:13:27.511 12:58:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:27.511 12:58:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
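The iterations above all follow one pattern: while spdk_nvme_perf (PID 79087, a 30 s, queue-depth-128, 512-byte random-read job against 10.0.0.2:4420) is still alive, the script detaches namespace 1, re-attaches Delay0, and bumps NULL1's size by one each pass (1000 → 1001 → … → 1030). The "Message suppressed 999 times: Read completed with error" lines are perf tolerating the read failures it sees while the namespace is momentarily detached. A sketch of the loop as it appears in the trace, not the script's literal text; rpc_py is the variable set at the top of the test.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do       # run until the 30 s perf job exits
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc_py" bdev_null_resize NULL1 "$null_size"
done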
00:13:27.770 12:58:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.029 12:58:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:28.029 12:58:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:28.288 true 00:13:28.288 12:58:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:28.288 12:58:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.224 12:58:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.484 12:58:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:29.484 12:58:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:29.484 true 00:13:29.484 12:58:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:29.484 12:58:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.742 12:58:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.000 12:58:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:30.000 12:58:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:30.259 true 00:13:30.259 12:58:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:30.259 12:58:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.194 Initializing NVMe Controllers 00:13:31.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:31.194 Controller IO queue size 128, less than required. 00:13:31.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:31.194 Controller IO queue size 128, less than required. 00:13:31.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:31.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:31.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:31.194 Initialization complete. Launching workers. 
00:13:31.194 ======================================================== 00:13:31.194 Latency(us) 00:13:31.194 Device Information : IOPS MiB/s Average min max 00:13:31.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 833.93 0.41 89698.83 2755.65 1061599.75 00:13:31.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13968.27 6.82 9163.45 2936.36 585578.06 00:13:31.194 ======================================================== 00:13:31.194 Total : 14802.20 7.23 13700.67 2755.65 1061599.75 00:13:31.194 00:13:31.194 12:58:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.453 12:58:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:31.453 12:58:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:31.453 true 00:13:31.712 12:58:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79087 00:13:31.712 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79087) - No such process 00:13:31.712 12:58:12 -- target/ns_hotplug_stress.sh@53 -- # wait 79087 00:13:31.712 12:58:12 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.712 12:58:12 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.970 12:58:12 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:31.970 12:58:12 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:31.970 12:58:12 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:31.970 12:58:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:31.970 12:58:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:32.228 null0 00:13:32.228 12:58:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.228 12:58:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.228 12:58:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:32.487 null1 00:13:32.487 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.487 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.487 12:58:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:32.746 null2 00:13:32.746 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.746 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.746 12:58:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:33.004 null3 00:13:33.004 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.004 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.004 12:58:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:33.263 null4 00:13:33.263 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.263 12:58:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.263 12:58:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:33.263 null5 00:13:33.521 12:58:14 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.521 12:58:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.521 12:58:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:33.780 null6 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:33.780 null7 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
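Once the perf phase ends (the "No such process" above means the 30 s run has exited), the script switches to a concurrency test: eight null bdevs (null0 … null7, 100 MiB each) are created and eight background workers each hot-add and hot-remove their own namespace ten times in parallel, which is why the add_ns/remove_ns traces from here on are interleaved. The shape of that phase, reconstructed from the traced helpers as a sketch rather than the script verbatim:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
add_remove() {                                   # one worker: add/remove a single namespace 10 times
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}
pids=()
for ((i = 0; i < 8; i++)); do                    # null0 .. null7 were created just above
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"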
00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@66 -- # wait 80148 80150 80151 80153 80155 80157 80159 80161 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.780 12:58:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.039 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:34.039 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.039 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:34.297 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.297 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.297 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.297 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.297 12:58:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.297 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.297 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.297 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.297 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.297 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.297 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.555 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.813 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:35.071 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.331 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.331 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:35.331 12:58:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:35.331 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.331 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:35.331 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.331 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.331 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.331 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
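[Editor's note] The interleaved ns_hotplug_stress.sh trace above comes from several background workers all running the same small add/remove loop (the @14-@18 line tags). A minimal sketch of that worker, reconstructed from the trace rather than quoted from the SPDK source, looks like this; the rpc path, subsystem NQN, loop bound of 10, and argument order are exactly as they appear in the traced commands:

```bash
#!/usr/bin/env bash
# Sketch of the add_remove worker implied by the @14-@18 trace tags
# (paraphrase of the pattern, not the verbatim SPDK script).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2                 # e.g. nsid=6, bdev=null5 in the trace
    for ((i = 0; i < 10; i++)); do
        # attach the null bdev as namespace $nsid, then detach it again
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}
```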
00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.657 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.936 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.937 12:58:16 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.937 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.195 12:58:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:36.453 12:58:17 -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.711 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:36.970 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
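[Editor's note] The @62-@66 tags show how those workers are fanned out: one backgrounded add_remove per namespace, PIDs collected into an array, then a single wait on all of them (the "wait 80148 80150 ..." line earlier in the trace). A hedged sketch of that launcher, building on the worker above:

```bash
# Sketch of the parallel launcher implied by the @62-@66 trace tags
# (nthreads and the null0..null7 bdev names are taken from the trace).
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &   # nsid 1..8 against bdevs null0..null7
    pids+=($!)
done
wait "${pids[@]}"                        # block until every worker finishes its 10 rounds
```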
00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.228 12:58:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.487 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.745 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.003 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.261 12:58:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.520 12:58:19 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.520 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.778 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:13:39.036 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.294 12:58:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.294 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.294 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.294 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.294 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:39.552 12:58:20 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:39.552 12:58:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:39.552 12:58:20 -- nvmf/common.sh@116 -- # sync 00:13:39.552 12:58:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:39.552 12:58:20 -- nvmf/common.sh@119 -- # set +e 00:13:39.552 12:58:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:39.552 12:58:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:39.552 rmmod nvme_tcp 00:13:39.552 rmmod nvme_fabrics 00:13:39.552 rmmod nvme_keyring 00:13:39.552 12:58:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:39.552 12:58:20 -- nvmf/common.sh@123 -- # set -e 00:13:39.552 12:58:20 -- nvmf/common.sh@124 -- # return 0 00:13:39.552 12:58:20 -- nvmf/common.sh@477 -- # '[' -n 78952 ']' 00:13:39.552 12:58:20 -- nvmf/common.sh@478 -- # killprocess 78952 00:13:39.552 12:58:20 -- common/autotest_common.sh@936 -- # '[' -z 78952 ']' 00:13:39.552 12:58:20 -- common/autotest_common.sh@940 -- # kill -0 78952 00:13:39.552 12:58:20 -- common/autotest_common.sh@941 -- # uname 00:13:39.552 12:58:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.552 12:58:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78952 00:13:39.810 killing process with pid 78952 00:13:39.810 12:58:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:39.810 12:58:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:39.810 
12:58:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78952' 00:13:39.810 12:58:20 -- common/autotest_common.sh@955 -- # kill 78952 00:13:39.810 12:58:20 -- common/autotest_common.sh@960 -- # wait 78952 00:13:39.810 12:58:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:39.810 12:58:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:39.810 12:58:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:39.810 12:58:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.810 12:58:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:39.810 12:58:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.810 12:58:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.810 12:58:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.810 12:58:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:39.810 00:13:39.810 real 0m43.831s 00:13:39.810 user 3m30.468s 00:13:39.810 sys 0m12.225s 00:13:39.810 12:58:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.811 12:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.811 ************************************ 00:13:39.811 END TEST nvmf_ns_hotplug_stress 00:13:39.811 ************************************ 00:13:40.069 12:58:20 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.069 12:58:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:40.069 12:58:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:40.069 12:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:40.069 ************************************ 00:13:40.069 START TEST nvmf_connect_stress 00:13:40.069 ************************************ 00:13:40.069 12:58:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.069 * Looking for test storage... 00:13:40.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.070 12:58:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:40.070 12:58:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:40.070 12:58:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:40.070 12:58:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:40.070 12:58:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:40.070 12:58:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:40.070 12:58:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:40.070 12:58:20 -- scripts/common.sh@335 -- # IFS=.-: 00:13:40.070 12:58:20 -- scripts/common.sh@335 -- # read -ra ver1 00:13:40.070 12:58:20 -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.070 12:58:20 -- scripts/common.sh@336 -- # read -ra ver2 00:13:40.070 12:58:20 -- scripts/common.sh@337 -- # local 'op=<' 00:13:40.070 12:58:20 -- scripts/common.sh@339 -- # ver1_l=2 00:13:40.070 12:58:20 -- scripts/common.sh@340 -- # ver2_l=1 00:13:40.070 12:58:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:40.070 12:58:20 -- scripts/common.sh@343 -- # case "$op" in 00:13:40.070 12:58:20 -- scripts/common.sh@344 -- # : 1 00:13:40.070 12:58:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:40.070 12:58:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
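[Editor's note] Just above, nvmftestfini tears the hotplug test down before connect_stress.sh starts: the initiator kernel modules are unloaded, the nvmf_tgt process (pid 78952) is killed and waited on, and the initiator-side test address is flushed. A condensed sketch of that cleanup, paraphrased from the traced commands:

```bash
# Condensed teardown mirroring the nvmftestfini trace (paraphrase, not the SPDK source)
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics

pid=78952                      # nvmf_tgt started for the hotplug test
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
    wait "$pid" || true
fi

ip -4 addr flush nvmf_init_if  # remove the 10.0.0.1/24 test address from the initiator veth
```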
ver1_l : ver2_l) )) 00:13:40.070 12:58:20 -- scripts/common.sh@364 -- # decimal 1 00:13:40.070 12:58:20 -- scripts/common.sh@352 -- # local d=1 00:13:40.070 12:58:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.070 12:58:20 -- scripts/common.sh@354 -- # echo 1 00:13:40.070 12:58:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:40.070 12:58:20 -- scripts/common.sh@365 -- # decimal 2 00:13:40.070 12:58:20 -- scripts/common.sh@352 -- # local d=2 00:13:40.070 12:58:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.070 12:58:20 -- scripts/common.sh@354 -- # echo 2 00:13:40.070 12:58:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:40.070 12:58:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:40.070 12:58:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:40.070 12:58:20 -- scripts/common.sh@367 -- # return 0 00:13:40.070 12:58:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.070 12:58:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.070 --rc genhtml_branch_coverage=1 00:13:40.070 --rc genhtml_function_coverage=1 00:13:40.070 --rc genhtml_legend=1 00:13:40.070 --rc geninfo_all_blocks=1 00:13:40.070 --rc geninfo_unexecuted_blocks=1 00:13:40.070 00:13:40.070 ' 00:13:40.070 12:58:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.070 --rc genhtml_branch_coverage=1 00:13:40.070 --rc genhtml_function_coverage=1 00:13:40.070 --rc genhtml_legend=1 00:13:40.070 --rc geninfo_all_blocks=1 00:13:40.070 --rc geninfo_unexecuted_blocks=1 00:13:40.070 00:13:40.070 ' 00:13:40.070 12:58:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.070 --rc genhtml_branch_coverage=1 00:13:40.070 --rc genhtml_function_coverage=1 00:13:40.070 --rc genhtml_legend=1 00:13:40.070 --rc geninfo_all_blocks=1 00:13:40.070 --rc geninfo_unexecuted_blocks=1 00:13:40.070 00:13:40.070 ' 00:13:40.070 12:58:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:40.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.070 --rc genhtml_branch_coverage=1 00:13:40.070 --rc genhtml_function_coverage=1 00:13:40.070 --rc genhtml_legend=1 00:13:40.070 --rc geninfo_all_blocks=1 00:13:40.070 --rc geninfo_unexecuted_blocks=1 00:13:40.070 00:13:40.070 ' 00:13:40.070 12:58:20 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:40.070 12:58:20 -- nvmf/common.sh@7 -- # uname -s 00:13:40.070 12:58:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.070 12:58:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.070 12:58:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.070 12:58:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.070 12:58:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.070 12:58:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.070 12:58:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.070 12:58:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.070 12:58:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.070 12:58:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.070 12:58:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
00:13:40.070 12:58:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:13:40.070 12:58:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.070 12:58:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.070 12:58:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:40.070 12:58:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.070 12:58:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.070 12:58:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.070 12:58:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.070 12:58:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.070 12:58:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.070 12:58:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.070 12:58:20 -- paths/export.sh@5 -- # export PATH 00:13:40.070 12:58:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.070 12:58:20 -- nvmf/common.sh@46 -- # : 0 00:13:40.070 12:58:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:40.070 12:58:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:40.070 12:58:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:40.070 12:58:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.070 12:58:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.070 12:58:20 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:40.070 12:58:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:40.070 12:58:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:40.070 12:58:20 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:40.070 12:58:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:40.070 12:58:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.070 12:58:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:40.070 12:58:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:40.070 12:58:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:40.070 12:58:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.070 12:58:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.070 12:58:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.070 12:58:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:40.070 12:58:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:40.070 12:58:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:40.070 12:58:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:40.070 12:58:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:40.070 12:58:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:40.070 12:58:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.070 12:58:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.070 12:58:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:40.070 12:58:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:40.070 12:58:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:40.070 12:58:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:40.070 12:58:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:40.070 12:58:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.070 12:58:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:40.070 12:58:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:40.070 12:58:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:40.070 12:58:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:40.070 12:58:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:40.070 12:58:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:40.070 Cannot find device "nvmf_tgt_br" 00:13:40.070 12:58:20 -- nvmf/common.sh@154 -- # true 00:13:40.070 12:58:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.070 Cannot find device "nvmf_tgt_br2" 00:13:40.070 12:58:20 -- nvmf/common.sh@155 -- # true 00:13:40.070 12:58:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:40.070 12:58:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:40.328 Cannot find device "nvmf_tgt_br" 00:13:40.328 12:58:20 -- nvmf/common.sh@157 -- # true 00:13:40.328 12:58:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:40.328 Cannot find device "nvmf_tgt_br2" 00:13:40.328 12:58:20 -- nvmf/common.sh@158 -- # true 00:13:40.328 12:58:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:40.328 12:58:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:40.328 12:58:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.328 12:58:20 -- nvmf/common.sh@161 -- # true 00:13:40.328 12:58:20 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.328 12:58:20 -- nvmf/common.sh@162 -- # true 00:13:40.328 12:58:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:40.328 12:58:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:40.328 12:58:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.328 12:58:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:40.328 12:58:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.328 12:58:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.328 12:58:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.328 12:58:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:40.328 12:58:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:40.328 12:58:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:40.328 12:58:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:40.328 12:58:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:40.328 12:58:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:40.328 12:58:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.328 12:58:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.328 12:58:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.328 12:58:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:40.328 12:58:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:40.328 12:58:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.328 12:58:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.328 12:58:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.328 12:58:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.328 12:58:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.328 12:58:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:40.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:40.328 00:13:40.329 --- 10.0.0.2 ping statistics --- 00:13:40.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.329 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:40.329 12:58:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:40.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:40.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:13:40.329 00:13:40.329 --- 10.0.0.3 ping statistics --- 00:13:40.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.329 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:40.329 12:58:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:40.587 00:13:40.587 --- 10.0.0.1 ping statistics --- 00:13:40.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.587 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:40.587 12:58:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.587 12:58:21 -- nvmf/common.sh@421 -- # return 0 00:13:40.587 12:58:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:40.587 12:58:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.587 12:58:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:40.587 12:58:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:40.587 12:58:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.587 12:58:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:40.587 12:58:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:40.587 12:58:21 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:40.587 12:58:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:40.587 12:58:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.587 12:58:21 -- common/autotest_common.sh@10 -- # set +x 00:13:40.587 12:58:21 -- nvmf/common.sh@469 -- # nvmfpid=81463 00:13:40.587 12:58:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:40.587 12:58:21 -- nvmf/common.sh@470 -- # waitforlisten 81463 00:13:40.587 12:58:21 -- common/autotest_common.sh@829 -- # '[' -z 81463 ']' 00:13:40.587 12:58:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.587 12:58:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.587 12:58:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.587 12:58:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.587 12:58:21 -- common/autotest_common.sh@10 -- # set +x 00:13:40.587 [2024-12-13 12:58:21.189989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:40.587 [2024-12-13 12:58:21.190577] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.587 [2024-12-13 12:58:21.327655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.847 [2024-12-13 12:58:21.389264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.847 [2024-12-13 12:58:21.389428] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.847 [2024-12-13 12:58:21.389442] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.847 [2024-12-13 12:58:21.389450] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
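[Editor's note] Before nvmf_tgt comes up for the connect-stress run, nvmf_veth_init (traced above) rebuilds the virtual test network: a target network namespace, veth pairs bridged on the host side, 10.0.0.1 for the initiator and 10.0.0.2/10.0.0.3 inside the namespace, plus an iptables rule for port 4420 and the sanity pings whose statistics appear above. A condensed sketch of that topology setup, assembled from the traced commands:

```bash
# Condensed nvmf_veth_init sequence as traced above (sketch, not the SPDK source)
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # sanity pings matching the statistics above
```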
00:13:40.847 [2024-12-13 12:58:21.389749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.847 [2024-12-13 12:58:21.390078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.847 [2024-12-13 12:58:21.390125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.782 12:58:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.782 12:58:22 -- common/autotest_common.sh@862 -- # return 0 00:13:41.782 12:58:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:41.782 12:58:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:41.782 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:41.782 12:58:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.782 12:58:22 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.782 12:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.782 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:41.782 [2024-12-13 12:58:22.248021] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.782 12:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.782 12:58:22 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.782 12:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.782 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:41.782 12:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.782 12:58:22 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.782 12:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.782 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:41.783 [2024-12-13 12:58:22.265774] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.783 12:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.783 12:58:22 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.783 12:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.783 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:41.783 NULL1 00:13:41.783 12:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.783 12:58:22 -- target/connect_stress.sh@21 -- # PERF_PID=81515 00:13:41.783 12:58:22 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:41.783 12:58:22 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:41.783 12:58:22 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- 
target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.783 12:58:22 -- target/connect_stress.sh@28 -- # cat 00:13:41.783 12:58:22 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:41.783 12:58:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.783 12:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.783 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.041 12:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.041 12:58:22 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:42.042 12:58:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.042 12:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.042 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.300 12:58:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.300 12:58:23 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:42.300 12:58:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.300 12:58:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.300 12:58:23 -- common/autotest_common.sh@10 -- # set +x 00:13:42.867 12:58:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.867 12:58:23 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:42.867 12:58:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.867 12:58:23 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:42.867 12:58:23 -- common/autotest_common.sh@10 -- # set +x 00:13:43.126 12:58:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.126 12:58:23 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:43.126 12:58:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.126 12:58:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.126 12:58:23 -- common/autotest_common.sh@10 -- # set +x 00:13:43.384 12:58:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.384 12:58:23 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:43.384 12:58:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.384 12:58:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.384 12:58:23 -- common/autotest_common.sh@10 -- # set +x 00:13:43.643 12:58:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.643 12:58:24 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:43.643 12:58:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.643 12:58:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.643 12:58:24 -- common/autotest_common.sh@10 -- # set +x 00:13:43.901 12:58:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.901 12:58:24 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:43.901 12:58:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.901 12:58:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.901 12:58:24 -- common/autotest_common.sh@10 -- # set +x 00:13:44.468 12:58:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.468 12:58:24 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:44.468 12:58:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.468 12:58:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.468 12:58:24 -- common/autotest_common.sh@10 -- # set +x 00:13:44.727 12:58:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.727 12:58:25 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:44.727 12:58:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.727 12:58:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.727 12:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:44.985 12:58:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.985 12:58:25 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:44.985 12:58:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.985 12:58:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.985 12:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:45.243 12:58:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.243 12:58:25 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:45.243 12:58:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.243 12:58:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.243 12:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:45.502 12:58:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.502 12:58:26 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:45.502 12:58:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.502 12:58:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.502 12:58:26 -- common/autotest_common.sh@10 -- # set +x 00:13:46.069 12:58:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.069 12:58:26 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:46.069 12:58:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.069 12:58:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.069 
12:58:26 -- common/autotest_common.sh@10 -- # set +x 00:13:46.327 12:58:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.327 12:58:26 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:46.327 12:58:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.327 12:58:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.327 12:58:26 -- common/autotest_common.sh@10 -- # set +x 00:13:46.586 12:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.586 12:58:27 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:46.586 12:58:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.586 12:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.586 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:46.844 12:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.844 12:58:27 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:46.844 12:58:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.845 12:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.845 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:47.103 12:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.103 12:58:27 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:47.103 12:58:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.103 12:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.103 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:47.671 12:58:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.671 12:58:28 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:47.671 12:58:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.671 12:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.671 12:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:47.930 12:58:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.930 12:58:28 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:47.930 12:58:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.930 12:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.930 12:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:48.188 12:58:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.188 12:58:28 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:48.188 12:58:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.188 12:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.188 12:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:48.447 12:58:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.447 12:58:29 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:48.447 12:58:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.447 12:58:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.447 12:58:29 -- common/autotest_common.sh@10 -- # set +x 00:13:48.706 12:58:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.706 12:58:29 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:48.706 12:58:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.706 12:58:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.706 12:58:29 -- common/autotest_common.sh@10 -- # set +x 00:13:49.273 12:58:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.273 12:58:29 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:49.273 12:58:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.273 12:58:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.273 12:58:29 -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.532 12:58:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.532 12:58:30 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:49.532 12:58:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.532 12:58:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.532 12:58:30 -- common/autotest_common.sh@10 -- # set +x 00:13:49.790 12:58:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.790 12:58:30 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:49.790 12:58:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.790 12:58:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.790 12:58:30 -- common/autotest_common.sh@10 -- # set +x 00:13:50.049 12:58:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.049 12:58:30 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:50.049 12:58:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.049 12:58:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.049 12:58:30 -- common/autotest_common.sh@10 -- # set +x 00:13:50.308 12:58:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.308 12:58:31 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:50.308 12:58:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.308 12:58:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.308 12:58:31 -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 12:58:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.876 12:58:31 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:50.876 12:58:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.876 12:58:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.876 12:58:31 -- common/autotest_common.sh@10 -- # set +x 00:13:51.134 12:58:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.134 12:58:31 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:51.134 12:58:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.134 12:58:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.134 12:58:31 -- common/autotest_common.sh@10 -- # set +x 00:13:51.393 12:58:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.393 12:58:32 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:51.393 12:58:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.393 12:58:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.393 12:58:32 -- common/autotest_common.sh@10 -- # set +x 00:13:51.651 12:58:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.651 12:58:32 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:51.651 12:58:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.652 12:58:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.652 12:58:32 -- common/autotest_common.sh@10 -- # set +x 00:13:51.910 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.910 12:58:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.910 12:58:32 -- target/connect_stress.sh@34 -- # kill -0 81515 00:13:51.910 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81515) - No such process 00:13:51.910 12:58:32 -- target/connect_stress.sh@38 -- # wait 81515 00:13:51.910 12:58:32 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:51.910 12:58:32 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:51.910 12:58:32 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:51.910 12:58:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:51.910 12:58:32 -- nvmf/common.sh@116 -- # sync 00:13:52.169 12:58:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:52.169 12:58:32 -- nvmf/common.sh@119 -- # set +e 00:13:52.169 12:58:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:52.169 12:58:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:52.169 rmmod nvme_tcp 00:13:52.169 rmmod nvme_fabrics 00:13:52.169 rmmod nvme_keyring 00:13:52.169 12:58:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:52.169 12:58:32 -- nvmf/common.sh@123 -- # set -e 00:13:52.169 12:58:32 -- nvmf/common.sh@124 -- # return 0 00:13:52.169 12:58:32 -- nvmf/common.sh@477 -- # '[' -n 81463 ']' 00:13:52.169 12:58:32 -- nvmf/common.sh@478 -- # killprocess 81463 00:13:52.169 12:58:32 -- common/autotest_common.sh@936 -- # '[' -z 81463 ']' 00:13:52.169 12:58:32 -- common/autotest_common.sh@940 -- # kill -0 81463 00:13:52.169 12:58:32 -- common/autotest_common.sh@941 -- # uname 00:13:52.169 12:58:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:52.169 12:58:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81463 00:13:52.169 killing process with pid 81463 00:13:52.169 12:58:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:52.169 12:58:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:52.169 12:58:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81463' 00:13:52.169 12:58:32 -- common/autotest_common.sh@955 -- # kill 81463 00:13:52.169 12:58:32 -- common/autotest_common.sh@960 -- # wait 81463 00:13:52.428 12:58:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:52.428 12:58:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:52.428 12:58:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:52.428 12:58:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.428 12:58:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:52.428 12:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.428 12:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.428 12:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.428 12:58:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:52.428 00:13:52.428 real 0m12.424s 00:13:52.428 user 0m41.524s 00:13:52.428 sys 0m3.203s 00:13:52.428 12:58:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:52.428 12:58:33 -- common/autotest_common.sh@10 -- # set +x 00:13:52.428 ************************************ 00:13:52.428 END TEST nvmf_connect_stress 00:13:52.428 ************************************ 00:13:52.428 12:58:33 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:52.428 12:58:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:52.428 12:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.428 12:58:33 -- common/autotest_common.sh@10 -- # set +x 00:13:52.428 ************************************ 00:13:52.428 START TEST nvmf_fused_ordering 00:13:52.428 ************************************ 00:13:52.428 12:58:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:52.428 * Looking for test storage... 
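The connect_stress pass that finishes above stands the target up through the SPDK RPC sequence captured in the trace (TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420, a 1000 MiB null bdev) before pointing the stress tool at it. A minimal sketch of that same sequence, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and using scripts/rpc.py directly rather than the rpc_cmd test wrapper seen here:

    # Sketch only -- same arguments as the trace above; driving scripts/rpc.py
    # against the default RPC socket is an assumption, the test itself goes
    # through the rpc_cmd helper.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    # the stress tool from the trace is then run against that subsystem
    # with the flags exactly as captured:
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10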
00:13:52.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.428 12:58:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:52.428 12:58:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:52.428 12:58:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:52.687 12:58:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:52.687 12:58:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:52.687 12:58:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:52.687 12:58:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:52.687 12:58:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:52.687 12:58:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:52.687 12:58:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.687 12:58:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:52.687 12:58:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:52.687 12:58:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:52.687 12:58:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:52.687 12:58:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:52.687 12:58:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:52.687 12:58:33 -- scripts/common.sh@344 -- # : 1 00:13:52.687 12:58:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:52.687 12:58:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.687 12:58:33 -- scripts/common.sh@364 -- # decimal 1 00:13:52.687 12:58:33 -- scripts/common.sh@352 -- # local d=1 00:13:52.687 12:58:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.687 12:58:33 -- scripts/common.sh@354 -- # echo 1 00:13:52.687 12:58:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:52.687 12:58:33 -- scripts/common.sh@365 -- # decimal 2 00:13:52.687 12:58:33 -- scripts/common.sh@352 -- # local d=2 00:13:52.687 12:58:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.687 12:58:33 -- scripts/common.sh@354 -- # echo 2 00:13:52.687 12:58:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:52.687 12:58:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:52.687 12:58:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:52.687 12:58:33 -- scripts/common.sh@367 -- # return 0 00:13:52.687 12:58:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.687 12:58:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.687 --rc genhtml_branch_coverage=1 00:13:52.687 --rc genhtml_function_coverage=1 00:13:52.687 --rc genhtml_legend=1 00:13:52.687 --rc geninfo_all_blocks=1 00:13:52.687 --rc geninfo_unexecuted_blocks=1 00:13:52.687 00:13:52.687 ' 00:13:52.687 12:58:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.687 --rc genhtml_branch_coverage=1 00:13:52.687 --rc genhtml_function_coverage=1 00:13:52.687 --rc genhtml_legend=1 00:13:52.687 --rc geninfo_all_blocks=1 00:13:52.687 --rc geninfo_unexecuted_blocks=1 00:13:52.687 00:13:52.687 ' 00:13:52.687 12:58:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.687 --rc genhtml_branch_coverage=1 00:13:52.687 --rc genhtml_function_coverage=1 00:13:52.687 --rc genhtml_legend=1 00:13:52.687 --rc geninfo_all_blocks=1 00:13:52.687 --rc geninfo_unexecuted_blocks=1 00:13:52.687 00:13:52.687 ' 00:13:52.687 
12:58:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.687 --rc genhtml_branch_coverage=1 00:13:52.687 --rc genhtml_function_coverage=1 00:13:52.687 --rc genhtml_legend=1 00:13:52.687 --rc geninfo_all_blocks=1 00:13:52.687 --rc geninfo_unexecuted_blocks=1 00:13:52.687 00:13:52.687 ' 00:13:52.687 12:58:33 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.687 12:58:33 -- nvmf/common.sh@7 -- # uname -s 00:13:52.687 12:58:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.687 12:58:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.687 12:58:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.687 12:58:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.687 12:58:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.687 12:58:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.687 12:58:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.687 12:58:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.687 12:58:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.687 12:58:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.687 12:58:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:13:52.687 12:58:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:13:52.687 12:58:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.687 12:58:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.687 12:58:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.687 12:58:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.687 12:58:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.687 12:58:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.687 12:58:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.688 12:58:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.688 12:58:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.688 12:58:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.688 12:58:33 -- paths/export.sh@5 -- # export PATH 00:13:52.688 12:58:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.688 12:58:33 -- nvmf/common.sh@46 -- # : 0 00:13:52.688 12:58:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:52.688 12:58:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:52.688 12:58:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:52.688 12:58:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.688 12:58:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.688 12:58:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:52.688 12:58:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:52.688 12:58:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:52.688 12:58:33 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:52.688 12:58:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:52.688 12:58:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.688 12:58:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:52.688 12:58:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:52.688 12:58:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:52.688 12:58:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.688 12:58:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.688 12:58:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.688 12:58:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:52.688 12:58:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:52.688 12:58:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:52.688 12:58:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:52.688 12:58:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:52.688 12:58:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:52.688 12:58:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.688 12:58:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.688 12:58:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:52.688 12:58:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:52.688 12:58:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.688 12:58:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.688 12:58:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.688 12:58:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:52.688 12:58:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.688 12:58:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.688 12:58:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.688 12:58:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.688 12:58:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:52.688 12:58:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:52.688 Cannot find device "nvmf_tgt_br" 00:13:52.688 12:58:33 -- nvmf/common.sh@154 -- # true 00:13:52.688 12:58:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.688 Cannot find device "nvmf_tgt_br2" 00:13:52.688 12:58:33 -- nvmf/common.sh@155 -- # true 00:13:52.688 12:58:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:52.688 12:58:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:52.688 Cannot find device "nvmf_tgt_br" 00:13:52.688 12:58:33 -- nvmf/common.sh@157 -- # true 00:13:52.688 12:58:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:52.688 Cannot find device "nvmf_tgt_br2" 00:13:52.688 12:58:33 -- nvmf/common.sh@158 -- # true 00:13:52.688 12:58:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:52.688 12:58:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:52.688 12:58:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.688 12:58:33 -- nvmf/common.sh@161 -- # true 00:13:52.688 12:58:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.688 12:58:33 -- nvmf/common.sh@162 -- # true 00:13:52.688 12:58:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.688 12:58:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.688 12:58:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.688 12:58:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:52.688 12:58:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.688 12:58:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.947 12:58:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.947 12:58:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:52.947 12:58:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:52.947 12:58:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:52.947 12:58:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:52.947 12:58:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:52.947 12:58:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:52.947 12:58:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.947 12:58:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:52.947 12:58:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:52.947 12:58:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:52.947 12:58:33 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:52.947 12:58:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:52.947 12:58:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.947 12:58:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.947 12:58:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.947 12:58:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.947 12:58:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:52.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:13:52.947 00:13:52.947 --- 10.0.0.2 ping statistics --- 00:13:52.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.947 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:52.947 12:58:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:52.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:13:52.947 00:13:52.947 --- 10.0.0.3 ping statistics --- 00:13:52.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.947 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:52.947 12:58:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:52.947 00:13:52.947 --- 10.0.0.1 ping statistics --- 00:13:52.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.947 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:52.947 12:58:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.947 12:58:33 -- nvmf/common.sh@421 -- # return 0 00:13:52.947 12:58:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:52.947 12:58:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.947 12:58:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:52.947 12:58:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:52.947 12:58:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.947 12:58:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:52.947 12:58:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:52.947 12:58:33 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:52.947 12:58:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:52.947 12:58:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.947 12:58:33 -- common/autotest_common.sh@10 -- # set +x 00:13:52.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.947 12:58:33 -- nvmf/common.sh@469 -- # nvmfpid=81848 00:13:52.947 12:58:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:52.947 12:58:33 -- nvmf/common.sh@470 -- # waitforlisten 81848 00:13:52.947 12:58:33 -- common/autotest_common.sh@829 -- # '[' -z 81848 ']' 00:13:52.947 12:58:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.947 12:58:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.947 12:58:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
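Before the target app comes up, nvmf_veth_init above builds the namespace-plus-bridge topology that makes 10.0.0.2:4420 reachable from the initiator side. A condensed sketch of that setup, assuming the same interface names and addresses as the trace (the helper also creates a second target interface, nvmf_tgt_if2 with 10.0.0.3, in the same way):

    # Condensed sketch of the topology the trace builds; not the full helper.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # the reachability check logged above

With the bridge in place, nvmf_tgt is launched inside the namespace via 'ip netns exec nvmf_tgt_ns_spdk ... -m 0x2' as shown, and the fused_ordering setup that follows repeats the transport/subsystem/listener/null-bdev RPCs, plus nvmf_subsystem_add_ns to attach NULL1 before the fused_ordering tool connects.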
00:13:52.947 12:58:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.947 12:58:33 -- common/autotest_common.sh@10 -- # set +x 00:13:52.947 [2024-12-13 12:58:33.644772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:52.947 [2024-12-13 12:58:33.644852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.206 [2024-12-13 12:58:33.779377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.206 [2024-12-13 12:58:33.836225] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:53.206 [2024-12-13 12:58:33.836384] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.206 [2024-12-13 12:58:33.836397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.206 [2024-12-13 12:58:33.836405] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.206 [2024-12-13 12:58:33.836433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.183 12:58:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.183 12:58:34 -- common/autotest_common.sh@862 -- # return 0 00:13:54.183 12:58:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:54.183 12:58:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:54.183 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:54.183 12:58:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.183 12:58:34 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.183 12:58:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.183 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:54.183 [2024-12-13 12:58:34.711170] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.183 12:58:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.183 12:58:34 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.183 12:58:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.183 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:54.183 12:58:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.183 12:58:34 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.183 12:58:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.183 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:54.183 [2024-12-13 12:58:34.727322] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.183 12:58:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.183 12:58:34 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:54.183 12:58:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.183 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:54.183 NULL1 00:13:54.183 12:58:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.183 12:58:34 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:54.183 12:58:34 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:54.183 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:54.183 12:58:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.183 12:58:34 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:54.183 12:58:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.183 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:54.183 12:58:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.183 12:58:34 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:54.183 [2024-12-13 12:58:34.777197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:54.183 [2024-12-13 12:58:34.777243] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81898 ] 00:13:54.446 Attached to nqn.2016-06.io.spdk:cnode1 00:13:54.446 Namespace ID: 1 size: 1GB 00:13:54.446 fused_ordering(0) 00:13:54.446 fused_ordering(1) 00:13:54.446 fused_ordering(2) 00:13:54.446 fused_ordering(3) 00:13:54.446 fused_ordering(4) 00:13:54.446 fused_ordering(5) 00:13:54.446 fused_ordering(6) 00:13:54.446 fused_ordering(7) 00:13:54.446 fused_ordering(8) 00:13:54.446 fused_ordering(9) 00:13:54.446 fused_ordering(10) 00:13:54.446 fused_ordering(11) 00:13:54.446 fused_ordering(12) 00:13:54.446 fused_ordering(13) 00:13:54.446 fused_ordering(14) 00:13:54.446 fused_ordering(15) 00:13:54.446 fused_ordering(16) 00:13:54.446 fused_ordering(17) 00:13:54.446 fused_ordering(18) 00:13:54.446 fused_ordering(19) 00:13:54.446 fused_ordering(20) 00:13:54.446 fused_ordering(21) 00:13:54.446 fused_ordering(22) 00:13:54.446 fused_ordering(23) 00:13:54.446 fused_ordering(24) 00:13:54.446 fused_ordering(25) 00:13:54.446 fused_ordering(26) 00:13:54.446 fused_ordering(27) 00:13:54.446 fused_ordering(28) 00:13:54.446 fused_ordering(29) 00:13:54.446 fused_ordering(30) 00:13:54.446 fused_ordering(31) 00:13:54.446 fused_ordering(32) 00:13:54.446 fused_ordering(33) 00:13:54.446 fused_ordering(34) 00:13:54.446 fused_ordering(35) 00:13:54.446 fused_ordering(36) 00:13:54.446 fused_ordering(37) 00:13:54.446 fused_ordering(38) 00:13:54.446 fused_ordering(39) 00:13:54.446 fused_ordering(40) 00:13:54.446 fused_ordering(41) 00:13:54.446 fused_ordering(42) 00:13:54.446 fused_ordering(43) 00:13:54.446 fused_ordering(44) 00:13:54.446 fused_ordering(45) 00:13:54.446 fused_ordering(46) 00:13:54.446 fused_ordering(47) 00:13:54.446 fused_ordering(48) 00:13:54.446 fused_ordering(49) 00:13:54.446 fused_ordering(50) 00:13:54.446 fused_ordering(51) 00:13:54.446 fused_ordering(52) 00:13:54.446 fused_ordering(53) 00:13:54.446 fused_ordering(54) 00:13:54.446 fused_ordering(55) 00:13:54.446 fused_ordering(56) 00:13:54.446 fused_ordering(57) 00:13:54.446 fused_ordering(58) 00:13:54.446 fused_ordering(59) 00:13:54.446 fused_ordering(60) 00:13:54.446 fused_ordering(61) 00:13:54.446 fused_ordering(62) 00:13:54.446 fused_ordering(63) 00:13:54.446 fused_ordering(64) 00:13:54.446 fused_ordering(65) 00:13:54.446 fused_ordering(66) 00:13:54.446 fused_ordering(67) 00:13:54.446 fused_ordering(68) 00:13:54.446 fused_ordering(69) 00:13:54.446 fused_ordering(70) 00:13:54.446 fused_ordering(71) 00:13:54.446 fused_ordering(72) 00:13:54.446 
fused_ordering(73) 00:13:54.446 fused_ordering(74) 00:13:54.446 fused_ordering(75) 00:13:54.446 fused_ordering(76) 00:13:54.446 fused_ordering(77) 00:13:54.446 fused_ordering(78) 00:13:54.446 fused_ordering(79) 00:13:54.446 fused_ordering(80) 00:13:54.446 fused_ordering(81) 00:13:54.446 fused_ordering(82) 00:13:54.446 fused_ordering(83) 00:13:54.446 fused_ordering(84) 00:13:54.446 fused_ordering(85) 00:13:54.446 fused_ordering(86) 00:13:54.446 fused_ordering(87) 00:13:54.446 fused_ordering(88) 00:13:54.446 fused_ordering(89) 00:13:54.446 fused_ordering(90) 00:13:54.446 fused_ordering(91) 00:13:54.446 fused_ordering(92) 00:13:54.446 fused_ordering(93) 00:13:54.446 fused_ordering(94) 00:13:54.446 fused_ordering(95) 00:13:54.446 fused_ordering(96) 00:13:54.446 fused_ordering(97) 00:13:54.446 fused_ordering(98) 00:13:54.446 fused_ordering(99) 00:13:54.446 fused_ordering(100) 00:13:54.446 fused_ordering(101) 00:13:54.446 fused_ordering(102) 00:13:54.446 fused_ordering(103) 00:13:54.446 fused_ordering(104) 00:13:54.446 fused_ordering(105) 00:13:54.446 fused_ordering(106) 00:13:54.446 fused_ordering(107) 00:13:54.446 fused_ordering(108) 00:13:54.446 fused_ordering(109) 00:13:54.446 fused_ordering(110) 00:13:54.446 fused_ordering(111) 00:13:54.446 fused_ordering(112) 00:13:54.446 fused_ordering(113) 00:13:54.446 fused_ordering(114) 00:13:54.446 fused_ordering(115) 00:13:54.446 fused_ordering(116) 00:13:54.446 fused_ordering(117) 00:13:54.446 fused_ordering(118) 00:13:54.446 fused_ordering(119) 00:13:54.446 fused_ordering(120) 00:13:54.446 fused_ordering(121) 00:13:54.446 fused_ordering(122) 00:13:54.446 fused_ordering(123) 00:13:54.446 fused_ordering(124) 00:13:54.446 fused_ordering(125) 00:13:54.446 fused_ordering(126) 00:13:54.446 fused_ordering(127) 00:13:54.446 fused_ordering(128) 00:13:54.446 fused_ordering(129) 00:13:54.446 fused_ordering(130) 00:13:54.446 fused_ordering(131) 00:13:54.446 fused_ordering(132) 00:13:54.446 fused_ordering(133) 00:13:54.446 fused_ordering(134) 00:13:54.446 fused_ordering(135) 00:13:54.446 fused_ordering(136) 00:13:54.446 fused_ordering(137) 00:13:54.446 fused_ordering(138) 00:13:54.446 fused_ordering(139) 00:13:54.446 fused_ordering(140) 00:13:54.446 fused_ordering(141) 00:13:54.446 fused_ordering(142) 00:13:54.446 fused_ordering(143) 00:13:54.446 fused_ordering(144) 00:13:54.446 fused_ordering(145) 00:13:54.446 fused_ordering(146) 00:13:54.446 fused_ordering(147) 00:13:54.446 fused_ordering(148) 00:13:54.446 fused_ordering(149) 00:13:54.446 fused_ordering(150) 00:13:54.446 fused_ordering(151) 00:13:54.446 fused_ordering(152) 00:13:54.446 fused_ordering(153) 00:13:54.446 fused_ordering(154) 00:13:54.446 fused_ordering(155) 00:13:54.446 fused_ordering(156) 00:13:54.446 fused_ordering(157) 00:13:54.446 fused_ordering(158) 00:13:54.446 fused_ordering(159) 00:13:54.446 fused_ordering(160) 00:13:54.446 fused_ordering(161) 00:13:54.446 fused_ordering(162) 00:13:54.446 fused_ordering(163) 00:13:54.446 fused_ordering(164) 00:13:54.446 fused_ordering(165) 00:13:54.446 fused_ordering(166) 00:13:54.446 fused_ordering(167) 00:13:54.446 fused_ordering(168) 00:13:54.446 fused_ordering(169) 00:13:54.446 fused_ordering(170) 00:13:54.446 fused_ordering(171) 00:13:54.446 fused_ordering(172) 00:13:54.446 fused_ordering(173) 00:13:54.446 fused_ordering(174) 00:13:54.446 fused_ordering(175) 00:13:54.446 fused_ordering(176) 00:13:54.446 fused_ordering(177) 00:13:54.446 fused_ordering(178) 00:13:54.446 fused_ordering(179) 00:13:54.446 fused_ordering(180) 00:13:54.446 
fused_ordering(181) 00:13:54.446 fused_ordering(182) 00:13:54.446 fused_ordering(183) 00:13:54.446 fused_ordering(184) 00:13:54.446 fused_ordering(185) 00:13:54.446 fused_ordering(186) 00:13:54.446 fused_ordering(187) 00:13:54.446 fused_ordering(188) 00:13:54.446 fused_ordering(189) 00:13:54.446 fused_ordering(190) 00:13:54.446 fused_ordering(191) 00:13:54.446 fused_ordering(192) 00:13:54.446 fused_ordering(193) 00:13:54.446 fused_ordering(194) 00:13:54.446 fused_ordering(195) 00:13:54.446 fused_ordering(196) 00:13:54.446 fused_ordering(197) 00:13:54.446 fused_ordering(198) 00:13:54.446 fused_ordering(199) 00:13:54.446 fused_ordering(200) 00:13:54.446 fused_ordering(201) 00:13:54.446 fused_ordering(202) 00:13:54.446 fused_ordering(203) 00:13:54.446 fused_ordering(204) 00:13:54.446 fused_ordering(205) 00:13:54.707 fused_ordering(206) 00:13:54.707 fused_ordering(207) 00:13:54.707 fused_ordering(208) 00:13:54.707 fused_ordering(209) 00:13:54.707 fused_ordering(210) 00:13:54.707 fused_ordering(211) 00:13:54.707 fused_ordering(212) 00:13:54.707 fused_ordering(213) 00:13:54.707 fused_ordering(214) 00:13:54.707 fused_ordering(215) 00:13:54.707 fused_ordering(216) 00:13:54.707 fused_ordering(217) 00:13:54.707 fused_ordering(218) 00:13:54.707 fused_ordering(219) 00:13:54.707 fused_ordering(220) 00:13:54.707 fused_ordering(221) 00:13:54.707 fused_ordering(222) 00:13:54.707 fused_ordering(223) 00:13:54.707 fused_ordering(224) 00:13:54.707 fused_ordering(225) 00:13:54.707 fused_ordering(226) 00:13:54.707 fused_ordering(227) 00:13:54.707 fused_ordering(228) 00:13:54.707 fused_ordering(229) 00:13:54.707 fused_ordering(230) 00:13:54.707 fused_ordering(231) 00:13:54.707 fused_ordering(232) 00:13:54.707 fused_ordering(233) 00:13:54.707 fused_ordering(234) 00:13:54.707 fused_ordering(235) 00:13:54.707 fused_ordering(236) 00:13:54.707 fused_ordering(237) 00:13:54.707 fused_ordering(238) 00:13:54.707 fused_ordering(239) 00:13:54.707 fused_ordering(240) 00:13:54.707 fused_ordering(241) 00:13:54.707 fused_ordering(242) 00:13:54.707 fused_ordering(243) 00:13:54.707 fused_ordering(244) 00:13:54.707 fused_ordering(245) 00:13:54.707 fused_ordering(246) 00:13:54.707 fused_ordering(247) 00:13:54.707 fused_ordering(248) 00:13:54.707 fused_ordering(249) 00:13:54.707 fused_ordering(250) 00:13:54.707 fused_ordering(251) 00:13:54.707 fused_ordering(252) 00:13:54.707 fused_ordering(253) 00:13:54.707 fused_ordering(254) 00:13:54.707 fused_ordering(255) 00:13:54.707 fused_ordering(256) 00:13:54.707 fused_ordering(257) 00:13:54.707 fused_ordering(258) 00:13:54.707 fused_ordering(259) 00:13:54.707 fused_ordering(260) 00:13:54.707 fused_ordering(261) 00:13:54.707 fused_ordering(262) 00:13:54.707 fused_ordering(263) 00:13:54.707 fused_ordering(264) 00:13:54.707 fused_ordering(265) 00:13:54.707 fused_ordering(266) 00:13:54.707 fused_ordering(267) 00:13:54.707 fused_ordering(268) 00:13:54.707 fused_ordering(269) 00:13:54.707 fused_ordering(270) 00:13:54.707 fused_ordering(271) 00:13:54.707 fused_ordering(272) 00:13:54.707 fused_ordering(273) 00:13:54.707 fused_ordering(274) 00:13:54.707 fused_ordering(275) 00:13:54.707 fused_ordering(276) 00:13:54.707 fused_ordering(277) 00:13:54.707 fused_ordering(278) 00:13:54.707 fused_ordering(279) 00:13:54.707 fused_ordering(280) 00:13:54.707 fused_ordering(281) 00:13:54.707 fused_ordering(282) 00:13:54.707 fused_ordering(283) 00:13:54.707 fused_ordering(284) 00:13:54.707 fused_ordering(285) 00:13:54.707 fused_ordering(286) 00:13:54.707 fused_ordering(287) 00:13:54.707 fused_ordering(288) 
00:13:54.707 fused_ordering(289) 00:13:54.707 fused_ordering(290) 00:13:54.707 fused_ordering(291) 00:13:54.707 fused_ordering(292) 00:13:54.707 fused_ordering(293) 00:13:54.707 fused_ordering(294) 00:13:54.707 fused_ordering(295) 00:13:54.707 fused_ordering(296) 00:13:54.707 fused_ordering(297) 00:13:54.707 fused_ordering(298) 00:13:54.707 fused_ordering(299) 00:13:54.707 fused_ordering(300) 00:13:54.707 fused_ordering(301) 00:13:54.707 fused_ordering(302) 00:13:54.707 fused_ordering(303) 00:13:54.707 fused_ordering(304) 00:13:54.707 fused_ordering(305) 00:13:54.707 fused_ordering(306) 00:13:54.707 fused_ordering(307) 00:13:54.707 fused_ordering(308) 00:13:54.707 fused_ordering(309) 00:13:54.707 fused_ordering(310) 00:13:54.707 fused_ordering(311) 00:13:54.707 fused_ordering(312) 00:13:54.707 fused_ordering(313) 00:13:54.707 fused_ordering(314) 00:13:54.707 fused_ordering(315) 00:13:54.707 fused_ordering(316) 00:13:54.707 fused_ordering(317) 00:13:54.707 fused_ordering(318) 00:13:54.707 fused_ordering(319) 00:13:54.707 fused_ordering(320) 00:13:54.707 fused_ordering(321) 00:13:54.707 fused_ordering(322) 00:13:54.707 fused_ordering(323) 00:13:54.707 fused_ordering(324) 00:13:54.707 fused_ordering(325) 00:13:54.707 fused_ordering(326) 00:13:54.707 fused_ordering(327) 00:13:54.707 fused_ordering(328) 00:13:54.707 fused_ordering(329) 00:13:54.707 fused_ordering(330) 00:13:54.707 fused_ordering(331) 00:13:54.707 fused_ordering(332) 00:13:54.707 fused_ordering(333) 00:13:54.707 fused_ordering(334) 00:13:54.707 fused_ordering(335) 00:13:54.707 fused_ordering(336) 00:13:54.707 fused_ordering(337) 00:13:54.707 fused_ordering(338) 00:13:54.707 fused_ordering(339) 00:13:54.707 fused_ordering(340) 00:13:54.707 fused_ordering(341) 00:13:54.707 fused_ordering(342) 00:13:54.707 fused_ordering(343) 00:13:54.707 fused_ordering(344) 00:13:54.707 fused_ordering(345) 00:13:54.707 fused_ordering(346) 00:13:54.707 fused_ordering(347) 00:13:54.707 fused_ordering(348) 00:13:54.707 fused_ordering(349) 00:13:54.707 fused_ordering(350) 00:13:54.707 fused_ordering(351) 00:13:54.707 fused_ordering(352) 00:13:54.707 fused_ordering(353) 00:13:54.707 fused_ordering(354) 00:13:54.707 fused_ordering(355) 00:13:54.707 fused_ordering(356) 00:13:54.707 fused_ordering(357) 00:13:54.707 fused_ordering(358) 00:13:54.707 fused_ordering(359) 00:13:54.707 fused_ordering(360) 00:13:54.707 fused_ordering(361) 00:13:54.707 fused_ordering(362) 00:13:54.707 fused_ordering(363) 00:13:54.707 fused_ordering(364) 00:13:54.707 fused_ordering(365) 00:13:54.707 fused_ordering(366) 00:13:54.707 fused_ordering(367) 00:13:54.707 fused_ordering(368) 00:13:54.707 fused_ordering(369) 00:13:54.707 fused_ordering(370) 00:13:54.707 fused_ordering(371) 00:13:54.707 fused_ordering(372) 00:13:54.707 fused_ordering(373) 00:13:54.707 fused_ordering(374) 00:13:54.707 fused_ordering(375) 00:13:54.707 fused_ordering(376) 00:13:54.707 fused_ordering(377) 00:13:54.707 fused_ordering(378) 00:13:54.707 fused_ordering(379) 00:13:54.707 fused_ordering(380) 00:13:54.707 fused_ordering(381) 00:13:54.707 fused_ordering(382) 00:13:54.707 fused_ordering(383) 00:13:54.707 fused_ordering(384) 00:13:54.707 fused_ordering(385) 00:13:54.707 fused_ordering(386) 00:13:54.707 fused_ordering(387) 00:13:54.707 fused_ordering(388) 00:13:54.708 fused_ordering(389) 00:13:54.708 fused_ordering(390) 00:13:54.708 fused_ordering(391) 00:13:54.708 fused_ordering(392) 00:13:54.708 fused_ordering(393) 00:13:54.708 fused_ordering(394) 00:13:54.708 fused_ordering(395) 00:13:54.708 
fused_ordering(396) 00:13:54.708 [repetitive fused_ordering trace condensed: entries fused_ordering(397) through fused_ordering(932) were logged one per request between 00:13:54.708 and 00:13:55.795, identical except for the counter; the run continues below] 00:13:55.795 fused_ordering(933)
00:13:55.795 fused_ordering(934) 00:13:55.795 fused_ordering(935) 00:13:55.795 fused_ordering(936) 00:13:55.795 fused_ordering(937) 00:13:55.795 fused_ordering(938) 00:13:55.795 fused_ordering(939) 00:13:55.795 fused_ordering(940) 00:13:55.795 fused_ordering(941) 00:13:55.795 fused_ordering(942) 00:13:55.795 fused_ordering(943) 00:13:55.795 fused_ordering(944) 00:13:55.795 fused_ordering(945) 00:13:55.795 fused_ordering(946) 00:13:55.795 fused_ordering(947) 00:13:55.795 fused_ordering(948) 00:13:55.795 fused_ordering(949) 00:13:55.795 fused_ordering(950) 00:13:55.795 fused_ordering(951) 00:13:55.795 fused_ordering(952) 00:13:55.795 fused_ordering(953) 00:13:55.795 fused_ordering(954) 00:13:55.795 fused_ordering(955) 00:13:55.795 fused_ordering(956) 00:13:55.795 fused_ordering(957) 00:13:55.795 fused_ordering(958) 00:13:55.795 fused_ordering(959) 00:13:55.795 fused_ordering(960) 00:13:55.795 fused_ordering(961) 00:13:55.795 fused_ordering(962) 00:13:55.795 fused_ordering(963) 00:13:55.795 fused_ordering(964) 00:13:55.795 fused_ordering(965) 00:13:55.795 fused_ordering(966) 00:13:55.795 fused_ordering(967) 00:13:55.795 fused_ordering(968) 00:13:55.795 fused_ordering(969) 00:13:55.795 fused_ordering(970) 00:13:55.795 fused_ordering(971) 00:13:55.795 fused_ordering(972) 00:13:55.795 fused_ordering(973) 00:13:55.795 fused_ordering(974) 00:13:55.795 fused_ordering(975) 00:13:55.795 fused_ordering(976) 00:13:55.795 fused_ordering(977) 00:13:55.795 fused_ordering(978) 00:13:55.795 fused_ordering(979) 00:13:55.795 fused_ordering(980) 00:13:55.795 fused_ordering(981) 00:13:55.795 fused_ordering(982) 00:13:55.795 fused_ordering(983) 00:13:55.795 fused_ordering(984) 00:13:55.795 fused_ordering(985) 00:13:55.795 fused_ordering(986) 00:13:55.795 fused_ordering(987) 00:13:55.795 fused_ordering(988) 00:13:55.795 fused_ordering(989) 00:13:55.795 fused_ordering(990) 00:13:55.795 fused_ordering(991) 00:13:55.795 fused_ordering(992) 00:13:55.795 fused_ordering(993) 00:13:55.795 fused_ordering(994) 00:13:55.795 fused_ordering(995) 00:13:55.795 fused_ordering(996) 00:13:55.795 fused_ordering(997) 00:13:55.795 fused_ordering(998) 00:13:55.795 fused_ordering(999) 00:13:55.795 fused_ordering(1000) 00:13:55.795 fused_ordering(1001) 00:13:55.795 fused_ordering(1002) 00:13:55.795 fused_ordering(1003) 00:13:55.795 fused_ordering(1004) 00:13:55.795 fused_ordering(1005) 00:13:55.795 fused_ordering(1006) 00:13:55.795 fused_ordering(1007) 00:13:55.795 fused_ordering(1008) 00:13:55.795 fused_ordering(1009) 00:13:55.795 fused_ordering(1010) 00:13:55.795 fused_ordering(1011) 00:13:55.795 fused_ordering(1012) 00:13:55.795 fused_ordering(1013) 00:13:55.795 fused_ordering(1014) 00:13:55.795 fused_ordering(1015) 00:13:55.795 fused_ordering(1016) 00:13:55.795 fused_ordering(1017) 00:13:55.795 fused_ordering(1018) 00:13:55.795 fused_ordering(1019) 00:13:55.795 fused_ordering(1020) 00:13:55.795 fused_ordering(1021) 00:13:55.795 fused_ordering(1022) 00:13:55.795 fused_ordering(1023) 00:13:55.795 12:58:36 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:55.795 12:58:36 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:55.795 12:58:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:55.795 12:58:36 -- nvmf/common.sh@116 -- # sync 00:13:56.054 12:58:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:56.054 12:58:36 -- nvmf/common.sh@119 -- # set +e 00:13:56.054 12:58:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:56.054 12:58:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:56.054 rmmod 
nvme_tcp 00:13:56.054 rmmod nvme_fabrics 00:13:56.054 rmmod nvme_keyring 00:13:56.054 12:58:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:56.054 12:58:36 -- nvmf/common.sh@123 -- # set -e 00:13:56.054 12:58:36 -- nvmf/common.sh@124 -- # return 0 00:13:56.054 12:58:36 -- nvmf/common.sh@477 -- # '[' -n 81848 ']' 00:13:56.054 12:58:36 -- nvmf/common.sh@478 -- # killprocess 81848 00:13:56.054 12:58:36 -- common/autotest_common.sh@936 -- # '[' -z 81848 ']' 00:13:56.054 12:58:36 -- common/autotest_common.sh@940 -- # kill -0 81848 00:13:56.054 12:58:36 -- common/autotest_common.sh@941 -- # uname 00:13:56.054 12:58:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:56.054 12:58:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81848 00:13:56.054 killing process with pid 81848 00:13:56.054 12:58:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:56.054 12:58:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:56.054 12:58:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81848' 00:13:56.054 12:58:36 -- common/autotest_common.sh@955 -- # kill 81848 00:13:56.054 12:58:36 -- common/autotest_common.sh@960 -- # wait 81848 00:13:56.313 12:58:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:56.313 12:58:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:56.313 12:58:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:56.313 12:58:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.313 12:58:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:56.313 12:58:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.313 12:58:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.313 12:58:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.313 12:58:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:56.313 ************************************ 00:13:56.313 END TEST nvmf_fused_ordering 00:13:56.313 ************************************ 00:13:56.313 00:13:56.313 real 0m3.839s 00:13:56.313 user 0m4.528s 00:13:56.313 sys 0m1.229s 00:13:56.313 12:58:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:56.313 12:58:36 -- common/autotest_common.sh@10 -- # set +x 00:13:56.313 12:58:36 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:56.313 12:58:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:56.313 12:58:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:56.313 12:58:36 -- common/autotest_common.sh@10 -- # set +x 00:13:56.313 ************************************ 00:13:56.313 START TEST nvmf_delete_subsystem 00:13:56.313 ************************************ 00:13:56.313 12:58:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:56.313 * Looking for test storage... 
00:13:56.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.313 12:58:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:56.313 12:58:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:56.313 12:58:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:56.572 12:58:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:56.572 12:58:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:56.572 12:58:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:56.572 12:58:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:56.572 12:58:37 -- scripts/common.sh@335 -- # IFS=.-: 00:13:56.572 12:58:37 -- scripts/common.sh@335 -- # read -ra ver1 00:13:56.572 12:58:37 -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.572 12:58:37 -- scripts/common.sh@336 -- # read -ra ver2 00:13:56.572 12:58:37 -- scripts/common.sh@337 -- # local 'op=<' 00:13:56.572 12:58:37 -- scripts/common.sh@339 -- # ver1_l=2 00:13:56.572 12:58:37 -- scripts/common.sh@340 -- # ver2_l=1 00:13:56.572 12:58:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:56.572 12:58:37 -- scripts/common.sh@343 -- # case "$op" in 00:13:56.572 12:58:37 -- scripts/common.sh@344 -- # : 1 00:13:56.572 12:58:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:56.572 12:58:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.572 12:58:37 -- scripts/common.sh@364 -- # decimal 1 00:13:56.572 12:58:37 -- scripts/common.sh@352 -- # local d=1 00:13:56.572 12:58:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.572 12:58:37 -- scripts/common.sh@354 -- # echo 1 00:13:56.572 12:58:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:56.572 12:58:37 -- scripts/common.sh@365 -- # decimal 2 00:13:56.572 12:58:37 -- scripts/common.sh@352 -- # local d=2 00:13:56.572 12:58:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.572 12:58:37 -- scripts/common.sh@354 -- # echo 2 00:13:56.572 12:58:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:56.572 12:58:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:56.572 12:58:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:56.572 12:58:37 -- scripts/common.sh@367 -- # return 0 00:13:56.572 12:58:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.572 12:58:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:56.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.573 --rc genhtml_branch_coverage=1 00:13:56.573 --rc genhtml_function_coverage=1 00:13:56.573 --rc genhtml_legend=1 00:13:56.573 --rc geninfo_all_blocks=1 00:13:56.573 --rc geninfo_unexecuted_blocks=1 00:13:56.573 00:13:56.573 ' 00:13:56.573 12:58:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:56.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.573 --rc genhtml_branch_coverage=1 00:13:56.573 --rc genhtml_function_coverage=1 00:13:56.573 --rc genhtml_legend=1 00:13:56.573 --rc geninfo_all_blocks=1 00:13:56.573 --rc geninfo_unexecuted_blocks=1 00:13:56.573 00:13:56.573 ' 00:13:56.573 12:58:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:56.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.573 --rc genhtml_branch_coverage=1 00:13:56.573 --rc genhtml_function_coverage=1 00:13:56.573 --rc genhtml_legend=1 00:13:56.573 --rc geninfo_all_blocks=1 00:13:56.573 --rc geninfo_unexecuted_blocks=1 00:13:56.573 00:13:56.573 ' 00:13:56.573 
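[editor's note] The `lt 1.15 2` check above is scripts/common.sh's component-wise version comparison: both strings are split on '.', '-' and ':' and compared field by field. A minimal standalone sketch of that logic is below; ver_lt is a made-up name (the in-tree helpers are lt/cmp_versions) and it assumes purely numeric components.

  # Hedged sketch of a component-wise "is version A older than version B" test.
  ver_lt() {
      local IFS=.-:            # split version strings the same way the log shows
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
          local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
          (( a < b )) && return 0                 # first differing field decides
          (( a > b )) && return 1
      done
      return 1                                    # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"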
12:58:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:56.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.573 --rc genhtml_branch_coverage=1 00:13:56.573 --rc genhtml_function_coverage=1 00:13:56.573 --rc genhtml_legend=1 00:13:56.573 --rc geninfo_all_blocks=1 00:13:56.573 --rc geninfo_unexecuted_blocks=1 00:13:56.573 00:13:56.573 ' 00:13:56.573 12:58:37 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.573 12:58:37 -- nvmf/common.sh@7 -- # uname -s 00:13:56.573 12:58:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.573 12:58:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.573 12:58:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.573 12:58:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.573 12:58:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.573 12:58:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.573 12:58:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.573 12:58:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.573 12:58:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.573 12:58:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.573 12:58:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:13:56.573 12:58:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:13:56.573 12:58:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.573 12:58:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.573 12:58:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:56.573 12:58:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.573 12:58:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.573 12:58:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.573 12:58:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.573 12:58:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.573 12:58:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.573 12:58:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.573 12:58:37 -- paths/export.sh@5 -- # export PATH 00:13:56.573 12:58:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.573 12:58:37 -- nvmf/common.sh@46 -- # : 0 00:13:56.573 12:58:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:56.573 12:58:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:56.573 12:58:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:56.573 12:58:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.573 12:58:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.573 12:58:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:56.573 12:58:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:56.573 12:58:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:56.573 12:58:37 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:56.573 12:58:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:56.573 12:58:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.573 12:58:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:56.573 12:58:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:56.573 12:58:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:56.573 12:58:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.573 12:58:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.573 12:58:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.573 12:58:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:56.573 12:58:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:56.573 12:58:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:56.573 12:58:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:56.573 12:58:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:56.573 12:58:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:56.573 12:58:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.573 12:58:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.573 12:58:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:56.573 12:58:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:56.573 12:58:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:56.573 12:58:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:56.573 12:58:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:56.573 12:58:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:56.573 12:58:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:56.573 12:58:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:56.573 12:58:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:56.573 12:58:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:56.573 12:58:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:56.573 12:58:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:56.573 Cannot find device "nvmf_tgt_br" 00:13:56.573 12:58:37 -- nvmf/common.sh@154 -- # true 00:13:56.573 12:58:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.573 Cannot find device "nvmf_tgt_br2" 00:13:56.573 12:58:37 -- nvmf/common.sh@155 -- # true 00:13:56.573 12:58:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:56.573 12:58:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:56.573 Cannot find device "nvmf_tgt_br" 00:13:56.573 12:58:37 -- nvmf/common.sh@157 -- # true 00:13:56.573 12:58:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:56.573 Cannot find device "nvmf_tgt_br2" 00:13:56.573 12:58:37 -- nvmf/common.sh@158 -- # true 00:13:56.573 12:58:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:56.573 12:58:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:56.573 12:58:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.573 12:58:37 -- nvmf/common.sh@161 -- # true 00:13:56.573 12:58:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.573 12:58:37 -- nvmf/common.sh@162 -- # true 00:13:56.573 12:58:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.573 12:58:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.573 12:58:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.573 12:58:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.573 12:58:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.573 12:58:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.832 12:58:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.832 12:58:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:56.832 12:58:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:56.832 12:58:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:56.832 12:58:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:56.832 12:58:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:56.832 12:58:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:56.832 12:58:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.832 12:58:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.832 12:58:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.832 12:58:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:56.832 12:58:37 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:56.832 12:58:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.832 12:58:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.832 12:58:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.832 12:58:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.832 12:58:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.832 12:58:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:56.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:13:56.832 00:13:56.832 --- 10.0.0.2 ping statistics --- 00:13:56.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.832 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:56.832 12:58:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:56.832 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.832 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:13:56.832 00:13:56.832 --- 10.0.0.3 ping statistics --- 00:13:56.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.832 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:56.832 12:58:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:56.832 00:13:56.832 --- 10.0.0.1 ping statistics --- 00:13:56.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.832 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:56.832 12:58:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.832 12:58:37 -- nvmf/common.sh@421 -- # return 0 00:13:56.832 12:58:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:56.832 12:58:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.832 12:58:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:56.832 12:58:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:56.832 12:58:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.832 12:58:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:56.832 12:58:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:56.832 12:58:37 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:56.832 12:58:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:56.832 12:58:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.832 12:58:37 -- common/autotest_common.sh@10 -- # set +x 00:13:56.832 12:58:37 -- nvmf/common.sh@469 -- # nvmfpid=82122 00:13:56.832 12:58:37 -- nvmf/common.sh@470 -- # waitforlisten 82122 00:13:56.832 12:58:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:56.832 12:58:37 -- common/autotest_common.sh@829 -- # '[' -z 82122 ']' 00:13:56.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.832 12:58:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.832 12:58:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.833 12:58:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
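[editor's note] The nvmf_veth_init sequence above builds the test topology that the ping checks then verify: a target network namespace, veth pairs bridged to the host, 10.0.0.x addresses, and an iptables accept rule for the NVMe/TCP port. The commands below are a hedged sketch reconstructed from the log (run as root; the second target interface nvmf_tgt_if2/10.0.0.3 and error handling are omitted), not the exact script.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-side address of the target namespace should answer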
00:13:56.833 12:58:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.833 12:58:37 -- common/autotest_common.sh@10 -- # set +x 00:13:56.833 [2024-12-13 12:58:37.544377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:56.833 [2024-12-13 12:58:37.544466] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.091 [2024-12-13 12:58:37.681135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:57.091 [2024-12-13 12:58:37.744785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:57.091 [2024-12-13 12:58:37.745154] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.091 [2024-12-13 12:58:37.745202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.091 [2024-12-13 12:58:37.745317] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.091 [2024-12-13 12:58:37.745478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.091 [2024-12-13 12:58:37.745487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.028 12:58:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.028 12:58:38 -- common/autotest_common.sh@862 -- # return 0 00:13:58.028 12:58:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:58.028 12:58:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.028 12:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 12:58:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.028 12:58:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.028 12:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 [2024-12-13 12:58:38.577847] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.028 12:58:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.028 12:58:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.028 12:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 12:58:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.028 12:58:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.028 12:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 [2024-12-13 12:58:38.593997] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.028 12:58:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:58.028 12:58:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.028 12:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 NULL1 00:13:58.028 12:58:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.028 12:58:38 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:58.028 12:58:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.028 12:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 Delay0 00:13:58.028 12:58:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.028 12:58:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.028 12:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 12:58:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@28 -- # perf_pid=82173 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:58.028 12:58:38 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:58.028 [2024-12-13 12:58:38.778386] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:59.932 12:58:40 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.932 12:58:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.932 12:58:40 -- common/autotest_common.sh@10 -- # set +x 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 Read completed with error (sct=0, sc=8) 00:14:00.191 starting I/O failed: -6 00:14:00.191 Write completed with error (sct=0, sc=8) 00:14:00.191 Write completed 
with error (sct=0, sc=8) 00:14:00.191 [repetitive I/O abort trace condensed: between 00:14:00.191 and 00:14:01.131 the log records several hundred further "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6", plus nvme_tcp_qpair_set_recv_state *ERROR* messages for tqpair=0x7082f0 (12:58:40.815038), tqpair=0x708040 (12:58:41.790639), tqpair=0x745360 (12:58:41.811887), tqpair=0x7458c0 (12:58:41.812056) and tqpair=0x7f6db800bf20 (12:58:41.812665), as queued I/O is aborted while the subsystem is deleted] 00:14:01.131 Read completed with
error (sct=0, sc=8) 00:14:01.131 [2024-12-13 12:58:41.812939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6db800c600 is same with the state(5) to be set 00:14:01.131 [2024-12-13 12:58:41.813875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x708040 (9): Bad file descriptor 00:14:01.131 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:01.131 12:58:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.131 12:58:41 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:01.131 12:58:41 -- target/delete_subsystem.sh@35 -- # kill -0 82173 00:14:01.131 12:58:41 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:01.131 Initializing NVMe Controllers 00:14:01.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.131 Controller IO queue size 128, less than required. 00:14:01.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:01.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:01.131 Initialization complete. Launching workers. 00:14:01.131 ======================================================== 00:14:01.131 Latency(us) 00:14:01.131 Device Information : IOPS MiB/s Average min max 00:14:01.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.38 0.08 896737.72 1442.17 1012006.09 00:14:01.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.87 0.08 984560.89 867.88 1999466.18 00:14:01.131 ======================================================== 00:14:01.131 Total : 339.25 0.17 940713.60 867.88 1999466.18 00:14:01.131 00:14:01.698 12:58:42 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:01.698 12:58:42 -- target/delete_subsystem.sh@35 -- # kill -0 82173 00:14:01.698 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82173) - No such process 00:14:01.698 12:58:42 -- target/delete_subsystem.sh@45 -- # NOT wait 82173 00:14:01.698 12:58:42 -- common/autotest_common.sh@650 -- # local es=0 00:14:01.698 12:58:42 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82173 00:14:01.698 12:58:42 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:01.698 12:58:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.698 12:58:42 -- common/autotest_common.sh@642 -- # type -t wait 00:14:01.699 12:58:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.699 12:58:42 -- common/autotest_common.sh@653 -- # wait 82173 00:14:01.699 12:58:42 -- common/autotest_common.sh@653 -- # es=1 00:14:01.699 12:58:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.699 12:58:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.699 12:58:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.699 12:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.699 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:14:01.699 12:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:14:01.699 12:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.699 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:14:01.699 [2024-12-13 12:58:42.339880] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.699 12:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.699 12:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.699 12:58:42 -- common/autotest_common.sh@10 -- # set +x 00:14:01.699 12:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@54 -- # perf_pid=82219 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:01.699 12:58:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.957 [2024-12-13 12:58:42.502646] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:02.216 12:58:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.216 12:58:42 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:02.216 12:58:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.783 12:58:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.783 12:58:43 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:02.783 12:58:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.350 12:58:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.350 12:58:43 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:03.350 12:58:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.609 12:58:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.609 12:58:44 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:03.609 12:58:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:04.176 12:58:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:04.176 12:58:44 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:04.176 12:58:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:04.742 12:58:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:04.742 12:58:45 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:04.742 12:58:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.001 Initializing NVMe Controllers 00:14:05.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.001 Controller IO queue size 128, less than required. 00:14:05.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:05.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:05.001 Initialization complete. Launching workers. 
00:14:05.001 ======================================================== 00:14:05.001 Latency(us) 00:14:05.001 Device Information : IOPS MiB/s Average min max 00:14:05.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002597.81 1000099.61 1009033.73 00:14:05.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004668.76 1000473.92 1011450.54 00:14:05.001 ======================================================== 00:14:05.001 Total : 256.00 0.12 1003633.28 1000099.61 1011450.54 00:14:05.001 00:14:05.259 12:58:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.259 12:58:45 -- target/delete_subsystem.sh@57 -- # kill -0 82219 00:14:05.259 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82219) - No such process 00:14:05.259 12:58:45 -- target/delete_subsystem.sh@67 -- # wait 82219 00:14:05.259 12:58:45 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:05.259 12:58:45 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:05.259 12:58:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.259 12:58:45 -- nvmf/common.sh@116 -- # sync 00:14:05.259 12:58:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.259 12:58:45 -- nvmf/common.sh@119 -- # set +e 00:14:05.259 12:58:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.259 12:58:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.259 rmmod nvme_tcp 00:14:05.259 rmmod nvme_fabrics 00:14:05.259 rmmod nvme_keyring 00:14:05.260 12:58:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.260 12:58:45 -- nvmf/common.sh@123 -- # set -e 00:14:05.260 12:58:45 -- nvmf/common.sh@124 -- # return 0 00:14:05.260 12:58:45 -- nvmf/common.sh@477 -- # '[' -n 82122 ']' 00:14:05.260 12:58:45 -- nvmf/common.sh@478 -- # killprocess 82122 00:14:05.260 12:58:45 -- common/autotest_common.sh@936 -- # '[' -z 82122 ']' 00:14:05.260 12:58:45 -- common/autotest_common.sh@940 -- # kill -0 82122 00:14:05.260 12:58:45 -- common/autotest_common.sh@941 -- # uname 00:14:05.260 12:58:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.260 12:58:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82122 00:14:05.260 killing process with pid 82122 00:14:05.260 12:58:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.260 12:58:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.260 12:58:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82122' 00:14:05.260 12:58:46 -- common/autotest_common.sh@955 -- # kill 82122 00:14:05.260 12:58:46 -- common/autotest_common.sh@960 -- # wait 82122 00:14:05.519 12:58:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:05.519 12:58:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:05.519 12:58:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:05.519 12:58:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.519 12:58:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:05.519 12:58:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.519 12:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.519 12:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.519 12:58:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:05.519 00:14:05.519 real 0m9.276s 00:14:05.519 user 0m28.723s 00:14:05.519 sys 0m1.492s 00:14:05.519 12:58:46 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:14:05.519 12:58:46 -- common/autotest_common.sh@10 -- # set +x 00:14:05.519 ************************************ 00:14:05.519 END TEST nvmf_delete_subsystem 00:14:05.519 ************************************ 00:14:05.778 12:58:46 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:05.778 12:58:46 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:05.778 12:58:46 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:05.778 12:58:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:05.778 12:58:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.778 12:58:46 -- common/autotest_common.sh@10 -- # set +x 00:14:05.778 ************************************ 00:14:05.778 START TEST nvmf_host_management 00:14:05.778 ************************************ 00:14:05.778 12:58:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:05.778 * Looking for test storage... 00:14:05.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:05.778 12:58:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:05.778 12:58:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:05.778 12:58:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:05.778 12:58:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:05.778 12:58:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:05.778 12:58:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:05.778 12:58:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:05.778 12:58:46 -- scripts/common.sh@335 -- # IFS=.-: 00:14:05.778 12:58:46 -- scripts/common.sh@335 -- # read -ra ver1 00:14:05.778 12:58:46 -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.778 12:58:46 -- scripts/common.sh@336 -- # read -ra ver2 00:14:05.778 12:58:46 -- scripts/common.sh@337 -- # local 'op=<' 00:14:05.778 12:58:46 -- scripts/common.sh@339 -- # ver1_l=2 00:14:05.778 12:58:46 -- scripts/common.sh@340 -- # ver2_l=1 00:14:05.778 12:58:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:05.778 12:58:46 -- scripts/common.sh@343 -- # case "$op" in 00:14:05.778 12:58:46 -- scripts/common.sh@344 -- # : 1 00:14:05.778 12:58:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:05.778 12:58:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.778 12:58:46 -- scripts/common.sh@364 -- # decimal 1 00:14:05.778 12:58:46 -- scripts/common.sh@352 -- # local d=1 00:14:05.778 12:58:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.778 12:58:46 -- scripts/common.sh@354 -- # echo 1 00:14:05.778 12:58:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:05.778 12:58:46 -- scripts/common.sh@365 -- # decimal 2 00:14:05.779 12:58:46 -- scripts/common.sh@352 -- # local d=2 00:14:05.779 12:58:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.779 12:58:46 -- scripts/common.sh@354 -- # echo 2 00:14:05.779 12:58:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:05.779 12:58:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:05.779 12:58:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:05.779 12:58:46 -- scripts/common.sh@367 -- # return 0 00:14:05.779 12:58:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.779 12:58:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:05.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.779 --rc genhtml_branch_coverage=1 00:14:05.779 --rc genhtml_function_coverage=1 00:14:05.779 --rc genhtml_legend=1 00:14:05.779 --rc geninfo_all_blocks=1 00:14:05.779 --rc geninfo_unexecuted_blocks=1 00:14:05.779 00:14:05.779 ' 00:14:05.779 12:58:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:05.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.779 --rc genhtml_branch_coverage=1 00:14:05.779 --rc genhtml_function_coverage=1 00:14:05.779 --rc genhtml_legend=1 00:14:05.779 --rc geninfo_all_blocks=1 00:14:05.779 --rc geninfo_unexecuted_blocks=1 00:14:05.779 00:14:05.779 ' 00:14:05.779 12:58:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:05.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.779 --rc genhtml_branch_coverage=1 00:14:05.779 --rc genhtml_function_coverage=1 00:14:05.779 --rc genhtml_legend=1 00:14:05.779 --rc geninfo_all_blocks=1 00:14:05.779 --rc geninfo_unexecuted_blocks=1 00:14:05.779 00:14:05.779 ' 00:14:05.779 12:58:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:05.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.779 --rc genhtml_branch_coverage=1 00:14:05.779 --rc genhtml_function_coverage=1 00:14:05.779 --rc genhtml_legend=1 00:14:05.779 --rc geninfo_all_blocks=1 00:14:05.779 --rc geninfo_unexecuted_blocks=1 00:14:05.779 00:14:05.779 ' 00:14:05.779 12:58:46 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.779 12:58:46 -- nvmf/common.sh@7 -- # uname -s 00:14:05.779 12:58:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.779 12:58:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.779 12:58:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.779 12:58:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.779 12:58:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.779 12:58:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.779 12:58:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.779 12:58:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.779 12:58:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.779 12:58:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.779 12:58:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
00:14:05.779 12:58:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:14:05.779 12:58:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.779 12:58:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.779 12:58:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.779 12:58:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.779 12:58:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.779 12:58:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.779 12:58:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.779 12:58:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.779 12:58:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.779 12:58:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.779 12:58:46 -- paths/export.sh@5 -- # export PATH 00:14:05.779 12:58:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.779 12:58:46 -- nvmf/common.sh@46 -- # : 0 00:14:05.779 12:58:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:05.779 12:58:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:05.779 12:58:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:05.779 12:58:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.779 12:58:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.779 12:58:46 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:05.779 12:58:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:05.779 12:58:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:05.779 12:58:46 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.779 12:58:46 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:05.779 12:58:46 -- target/host_management.sh@104 -- # nvmftestinit 00:14:05.779 12:58:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:05.779 12:58:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.779 12:58:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:05.779 12:58:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:05.779 12:58:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:05.779 12:58:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.779 12:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.779 12:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.779 12:58:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:05.779 12:58:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:05.779 12:58:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:05.779 12:58:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:05.779 12:58:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:05.779 12:58:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:05.779 12:58:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.779 12:58:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.779 12:58:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.779 12:58:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:05.779 12:58:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.779 12:58:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.779 12:58:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.779 12:58:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.779 12:58:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.779 12:58:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.779 12:58:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.779 12:58:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.779 12:58:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:05.779 12:58:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:06.038 Cannot find device "nvmf_tgt_br" 00:14:06.038 12:58:46 -- nvmf/common.sh@154 -- # true 00:14:06.038 12:58:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.038 Cannot find device "nvmf_tgt_br2" 00:14:06.038 12:58:46 -- nvmf/common.sh@155 -- # true 00:14:06.038 12:58:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:06.038 12:58:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:06.038 Cannot find device "nvmf_tgt_br" 00:14:06.038 12:58:46 -- nvmf/common.sh@157 -- # true 00:14:06.038 12:58:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:06.038 Cannot find device "nvmf_tgt_br2" 00:14:06.038 12:58:46 -- nvmf/common.sh@158 -- # true 00:14:06.038 12:58:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:06.038 12:58:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:06.038 12:58:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:06.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.038 12:58:46 -- nvmf/common.sh@161 -- # true 00:14:06.038 12:58:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.038 12:58:46 -- nvmf/common.sh@162 -- # true 00:14:06.038 12:58:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:06.038 12:58:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:06.038 12:58:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.038 12:58:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.038 12:58:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.038 12:58:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.038 12:58:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.038 12:58:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.038 12:58:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.038 12:58:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:06.038 12:58:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:06.038 12:58:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:06.038 12:58:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:06.038 12:58:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.038 12:58:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:06.038 12:58:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.038 12:58:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:06.038 12:58:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:06.038 12:58:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.038 12:58:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.297 12:58:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.297 12:58:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.297 12:58:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.297 12:58:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:06.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:06.297 00:14:06.297 --- 10.0.0.2 ping statistics --- 00:14:06.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.297 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:06.297 12:58:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:06.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:14:06.297 00:14:06.297 --- 10.0.0.3 ping statistics --- 00:14:06.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.297 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:06.297 12:58:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:06.297 00:14:06.297 --- 10.0.0.1 ping statistics --- 00:14:06.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.297 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:06.297 12:58:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.297 12:58:46 -- nvmf/common.sh@421 -- # return 0 00:14:06.297 12:58:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.297 12:58:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.297 12:58:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.297 12:58:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.297 12:58:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.297 12:58:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.297 12:58:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.297 12:58:46 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:06.297 12:58:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:06.297 12:58:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.297 12:58:46 -- common/autotest_common.sh@10 -- # set +x 00:14:06.297 ************************************ 00:14:06.297 START TEST nvmf_host_management 00:14:06.298 ************************************ 00:14:06.298 12:58:46 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:06.298 12:58:46 -- target/host_management.sh@69 -- # starttarget 00:14:06.298 12:58:46 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:06.298 12:58:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.298 12:58:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.298 12:58:46 -- common/autotest_common.sh@10 -- # set +x 00:14:06.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.298 12:58:46 -- nvmf/common.sh@469 -- # nvmfpid=82458 00:14:06.298 12:58:46 -- nvmf/common.sh@470 -- # waitforlisten 82458 00:14:06.298 12:58:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:06.298 12:58:46 -- common/autotest_common.sh@829 -- # '[' -z 82458 ']' 00:14:06.298 12:58:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.298 12:58:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.298 12:58:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.298 12:58:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.298 12:58:46 -- common/autotest_common.sh@10 -- # set +x 00:14:06.298 [2024-12-13 12:58:46.954942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:06.298 [2024-12-13 12:58:46.955034] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.556 [2024-12-13 12:58:47.093011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.556 [2024-12-13 12:58:47.152665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.556 [2024-12-13 12:58:47.153087] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:06.556 [2024-12-13 12:58:47.153150] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.556 [2024-12-13 12:58:47.153341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.556 [2024-12-13 12:58:47.153600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.556 [2024-12-13 12:58:47.153914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:06.556 [2024-12-13 12:58:47.153915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.556 [2024-12-13 12:58:47.153801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.491 12:58:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.491 12:58:47 -- common/autotest_common.sh@862 -- # return 0 00:14:07.491 12:58:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.491 12:58:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.491 12:58:47 -- common/autotest_common.sh@10 -- # set +x 00:14:07.491 12:58:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.491 12:58:48 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.491 12:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.491 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:14:07.491 [2024-12-13 12:58:48.029490] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.491 12:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.491 12:58:48 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:07.491 12:58:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.491 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:14:07.491 12:58:48 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:07.491 12:58:48 -- target/host_management.sh@23 -- # cat 00:14:07.491 12:58:48 -- target/host_management.sh@30 -- # rpc_cmd 00:14:07.491 12:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.491 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:14:07.491 Malloc0 00:14:07.491 [2024-12-13 12:58:48.105309] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.491 12:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.491 12:58:48 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:07.491 12:58:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.491 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:14:07.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
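Note: the trace above stands the NVMe-oF/TCP target up by hand before the bdevperf initiator below is launched: a TCP transport is created, a 64 MiB / 512-byte-block Malloc bdev is added, and a subsystem with one namespace and a listener on 10.0.0.2:4420 is built around it. A condensed sketch of an equivalent rpc.py sequence is given here for orientation only; it assumes the default /var/tmp/spdk.sock RPC socket, and the exact subsystem flags (plus the nvmf_subsystem_add_host step implied by the later remove_host call) are not all visible in this part of the log:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # same transport options as the trace above
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512               # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from host_management.sh
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME   # serial taken from NVMF_SERIAL in common.sh above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # assumed; only the matching remove_host appears later in the trace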
00:14:07.491 12:58:48 -- target/host_management.sh@73 -- # perfpid=82530 00:14:07.491 12:58:48 -- target/host_management.sh@74 -- # waitforlisten 82530 /var/tmp/bdevperf.sock 00:14:07.491 12:58:48 -- common/autotest_common.sh@829 -- # '[' -z 82530 ']' 00:14:07.491 12:58:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.491 12:58:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.491 12:58:48 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:07.491 12:58:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.491 12:58:48 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:07.491 12:58:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.491 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:14:07.492 12:58:48 -- nvmf/common.sh@520 -- # config=() 00:14:07.492 12:58:48 -- nvmf/common.sh@520 -- # local subsystem config 00:14:07.492 12:58:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:07.492 12:58:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:07.492 { 00:14:07.492 "params": { 00:14:07.492 "name": "Nvme$subsystem", 00:14:07.492 "trtype": "$TEST_TRANSPORT", 00:14:07.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.492 "adrfam": "ipv4", 00:14:07.492 "trsvcid": "$NVMF_PORT", 00:14:07.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.492 "hdgst": ${hdgst:-false}, 00:14:07.492 "ddgst": ${ddgst:-false} 00:14:07.492 }, 00:14:07.492 "method": "bdev_nvme_attach_controller" 00:14:07.492 } 00:14:07.492 EOF 00:14:07.492 )") 00:14:07.492 12:58:48 -- nvmf/common.sh@542 -- # cat 00:14:07.492 12:58:48 -- nvmf/common.sh@544 -- # jq . 00:14:07.492 12:58:48 -- nvmf/common.sh@545 -- # IFS=, 00:14:07.492 12:58:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:07.492 "params": { 00:14:07.492 "name": "Nvme0", 00:14:07.492 "trtype": "tcp", 00:14:07.492 "traddr": "10.0.0.2", 00:14:07.492 "adrfam": "ipv4", 00:14:07.492 "trsvcid": "4420", 00:14:07.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:07.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:07.492 "hdgst": false, 00:14:07.492 "ddgst": false 00:14:07.492 }, 00:14:07.492 "method": "bdev_nvme_attach_controller" 00:14:07.492 }' 00:14:07.492 [2024-12-13 12:58:48.205326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:07.492 [2024-12-13 12:58:48.205422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82530 ] 00:14:07.750 [2024-12-13 12:58:48.345166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.750 [2024-12-13 12:58:48.401656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.008 Running I/O for 10 seconds... 
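Note: the trace that follows is host_management.sh's waitforio helper confirming that I/O is really flowing through Nvme0n1 before the test starts interfering with the subsystem: it queries bdev_get_iostat over the bdevperf app's private RPC socket and moves on once num_read_ops reaches at least 100 (2279 reads are already seen on the first poll below). A minimal stand-alone version of that polling loop, assuming rpc.py and jq are available and bdevperf was started with -r /var/tmp/bdevperf.sock as above:
  # poll up to 10 times for read I/O on the Nvme0n1 bdev inside the bdevperf process
  for i in {1..10}; do
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [[ "$reads" -ge 100 ]] && break      # enough I/O observed, continue with the test
    sleep 1                              # pause between polls (the cadence here is illustrative)
  done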
00:14:08.578 12:58:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.578 12:58:49 -- common/autotest_common.sh@862 -- # return 0 00:14:08.578 12:58:49 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:08.578 12:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.578 12:58:49 -- common/autotest_common.sh@10 -- # set +x 00:14:08.578 12:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.578 12:58:49 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:08.578 12:58:49 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:08.578 12:58:49 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:08.578 12:58:49 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:08.578 12:58:49 -- target/host_management.sh@52 -- # local ret=1 00:14:08.578 12:58:49 -- target/host_management.sh@53 -- # local i 00:14:08.578 12:58:49 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:08.578 12:58:49 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:08.578 12:58:49 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:08.578 12:58:49 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:08.578 12:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.578 12:58:49 -- common/autotest_common.sh@10 -- # set +x 00:14:08.578 12:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.578 12:58:49 -- target/host_management.sh@55 -- # read_io_count=2279 00:14:08.578 12:58:49 -- target/host_management.sh@58 -- # '[' 2279 -ge 100 ']' 00:14:08.578 12:58:49 -- target/host_management.sh@59 -- # ret=0 00:14:08.578 12:58:49 -- target/host_management.sh@60 -- # break 00:14:08.578 12:58:49 -- target/host_management.sh@64 -- # return 0 00:14:08.578 12:58:49 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:08.578 12:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.578 12:58:49 -- common/autotest_common.sh@10 -- # set +x 00:14:08.578 [2024-12-13 12:58:49.242449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the state(5) to be set 00:14:08.578 [2024-12-13 12:58:49.242492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the state(5) to be set 00:14:08.578 [2024-12-13 12:58:49.242503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the state(5) to be set 00:14:08.578 [2024-12-13 12:58:49.242511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the state(5) to be set 00:14:08.578 [2024-12-13 12:58:49.242519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the state(5) to be set 00:14:08.578 [2024-12-13 12:58:49.242526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the state(5) to be set 00:14:08.578 [2024-12-13 12:58:49.242534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the state(5) to be set 00:14:08.578 [2024-12-13 12:58:49.242541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207530 is same with the 
state(5) to be set
00:14:08.578 [the same tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x2207530 repeats many more times at this point; repeated entries condensed]
00:14:08.578 task offset: 54016 on job bdev=Nvme0n1 fails
00:14:08.578 Latency(us)
00:14:08.578 [2024-12-13T12:58:49.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:08.578 [2024-12-13T12:58:49.354Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:08.578 [2024-12-13T12:58:49.354Z] Job: Nvme0n1 ended in about 0.68 seconds with error
00:14:08.578 Verification LBA range: start 0x0 length 0x400
00:14:08.578 Nvme0n1 : 0.68 3611.59 225.72 94.54 0.00 16980.14 2010.76 23116.33
00:14:08.578 [2024-12-13T12:58:49.354Z] ===================================================================================================================
00:14:08.578 [2024-12-13T12:58:49.354Z] Total : 3611.59 225.72 94.54 0.00 16980.14 2010.76 23116.33
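Note: the job summary above is the expected outcome for this test: the host is removed from the subsystem while the verify workload is still running, the target drops the queue pair, and every command still in flight is completed as ABORTED - SQ DELETION, which is what the (condensed) nvme_qpair output below records. The same "wait for the I/O job to die, then assert that it failed" idiom already appeared in the delete_subsystem trace earlier; a minimal plain-bash sketch of it, without the autotest helper functions:
  perf_pid=$!                                   # pid of the background spdk_nvme_perf / bdevperf job
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do     # still running?
    (( delay++ > 20 )) && { echo "I/O job did not exit in time" >&2; exit 1; }
    sleep 0.5
  done
  if wait "$perf_pid"; then                     # reap it; a zero exit here would mean the teardown had no effect
    echo "I/O job unexpectedly succeeded" >&2
    exit 1
  fi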
00:14:08.578 [2024-12-13 12:58:49.243457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:08.578 [2024-12-13 12:58:49.243492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:08.579 [every remaining in-flight READ/WRITE command on qid:1 is printed the same way by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) by spdk_nvme_print_completion; the repeated entry pairs are condensed here]
[2024-12-13 12:58:49.244555] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.580 [2024-12-13 12:58:49.244825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.244904] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15ab7c0 was disconnected and freed. reset controller. 00:14:08.580 [2024-12-13 12:58:49.244988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.580 [2024-12-13 12:58:49.245005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.245016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.580 [2024-12-13 12:58:49.245025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.245035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.580 [2024-12-13 12:58:49.245043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.245053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.580 [2024-12-13 12:58:49.245061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.245071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16122e0 is same with the state(5) to be set 00:14:08.580 [2024-12-13 12:58:49.246224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:08.580 [2024-12-13 12:58:49.248190] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:08.580 [2024-12-13 12:58:49.248212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16122e0 (9): Bad file descriptor 00:14:08.580 12:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.580 12:58:49 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:08.580 12:58:49 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.580 12:58:49 -- common/autotest_common.sh@10 -- # set +x 00:14:08.580 [2024-12-13 12:58:49.255843] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:08.580 [2024-12-13 12:58:49.255934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:08.580 [2024-12-13 12:58:49.255957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.580 [2024-12-13 12:58:49.255976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:08.580 [2024-12-13 12:58:49.255986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:08.580 [2024-12-13 12:58:49.255995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:08.580 [2024-12-13 12:58:49.256004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16122e0 00:14:08.580 [2024-12-13 12:58:49.256039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16122e0 (9): Bad file descriptor 00:14:08.580 [2024-12-13 12:58:49.256058] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:08.580 [2024-12-13 12:58:49.256068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:08.580 [2024-12-13 12:58:49.256078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:08.580 [2024-12-13 12:58:49.256095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:14:08.580 12:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.580 12:58:49 -- target/host_management.sh@87 -- # sleep 1 00:14:09.515 12:58:50 -- target/host_management.sh@91 -- # kill -9 82530 00:14:09.515 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82530) - No such process 00:14:09.515 12:58:50 -- target/host_management.sh@91 -- # true 00:14:09.515 12:58:50 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:09.515 12:58:50 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:09.515 12:58:50 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:09.515 12:58:50 -- nvmf/common.sh@520 -- # config=() 00:14:09.515 12:58:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:09.515 12:58:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:09.515 12:58:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:09.515 { 00:14:09.515 "params": { 00:14:09.515 "name": "Nvme$subsystem", 00:14:09.515 "trtype": "$TEST_TRANSPORT", 00:14:09.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:09.515 "adrfam": "ipv4", 00:14:09.515 "trsvcid": "$NVMF_PORT", 00:14:09.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:09.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:09.515 "hdgst": ${hdgst:-false}, 00:14:09.515 "ddgst": ${ddgst:-false} 00:14:09.515 }, 00:14:09.515 "method": "bdev_nvme_attach_controller" 00:14:09.515 } 00:14:09.515 EOF 00:14:09.515 )") 00:14:09.515 12:58:50 -- nvmf/common.sh@542 -- # cat 00:14:09.515 12:58:50 -- nvmf/common.sh@544 -- # jq . 00:14:09.515 12:58:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:09.515 12:58:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:09.515 "params": { 00:14:09.515 "name": "Nvme0", 00:14:09.515 "trtype": "tcp", 00:14:09.515 "traddr": "10.0.0.2", 00:14:09.515 "adrfam": "ipv4", 00:14:09.515 "trsvcid": "4420", 00:14:09.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:09.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:09.515 "hdgst": false, 00:14:09.515 "ddgst": false 00:14:09.515 }, 00:14:09.515 "method": "bdev_nvme_attach_controller" 00:14:09.515 }' 00:14:09.773 [2024-12-13 12:58:50.321587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:09.773 [2024-12-13 12:58:50.321670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82586 ] 00:14:09.773 [2024-12-13 12:58:50.460264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.773 [2024-12-13 12:58:50.512726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.031 Running I/O for 1 seconds... 
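Reference sketch, not part of the captured console output: the JSON fragment printed above is handed to bdevperf through /dev/fd/62 by the test harness. To reproduce the same run by hand, the fragment can be wrapped in the standard SPDK JSON-config layout and passed to bdevperf as an ordinary file. The "subsystems"/"bdev"/"config" wrapper nesting and the file name /tmp/bdevperf_nvme0.json are assumptions for illustration; the attach parameters and the bdevperf flags mirror the log lines above.

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the run above: queue depth 64, 64 KiB I/O size, verify workload, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1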
00:14:10.967 00:14:10.967 Latency(us) 00:14:10.967 [2024-12-13T12:58:51.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.967 [2024-12-13T12:58:51.743Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:10.967 Verification LBA range: start 0x0 length 0x400 00:14:10.967 Nvme0n1 : 1.01 3814.18 238.39 0.00 0.00 16492.88 1236.25 21328.99 00:14:10.967 [2024-12-13T12:58:51.743Z] =================================================================================================================== 00:14:10.967 [2024-12-13T12:58:51.743Z] Total : 3814.18 238.39 0.00 0.00 16492.88 1236.25 21328.99 00:14:11.225 12:58:51 -- target/host_management.sh@101 -- # stoptarget 00:14:11.225 12:58:51 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:11.225 12:58:51 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:11.225 12:58:51 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:11.225 12:58:51 -- target/host_management.sh@40 -- # nvmftestfini 00:14:11.225 12:58:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:11.225 12:58:51 -- nvmf/common.sh@116 -- # sync 00:14:11.225 12:58:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:11.225 12:58:51 -- nvmf/common.sh@119 -- # set +e 00:14:11.225 12:58:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:11.225 12:58:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:11.225 rmmod nvme_tcp 00:14:11.225 rmmod nvme_fabrics 00:14:11.225 rmmod nvme_keyring 00:14:11.225 12:58:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:11.483 12:58:52 -- nvmf/common.sh@123 -- # set -e 00:14:11.483 12:58:52 -- nvmf/common.sh@124 -- # return 0 00:14:11.483 12:58:52 -- nvmf/common.sh@477 -- # '[' -n 82458 ']' 00:14:11.483 12:58:52 -- nvmf/common.sh@478 -- # killprocess 82458 00:14:11.483 12:58:52 -- common/autotest_common.sh@936 -- # '[' -z 82458 ']' 00:14:11.483 12:58:52 -- common/autotest_common.sh@940 -- # kill -0 82458 00:14:11.483 12:58:52 -- common/autotest_common.sh@941 -- # uname 00:14:11.483 12:58:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.484 12:58:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82458 00:14:11.484 killing process with pid 82458 00:14:11.484 12:58:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:11.484 12:58:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:11.484 12:58:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82458' 00:14:11.484 12:58:52 -- common/autotest_common.sh@955 -- # kill 82458 00:14:11.484 12:58:52 -- common/autotest_common.sh@960 -- # wait 82458 00:14:11.484 [2024-12-13 12:58:52.221813] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:11.484 12:58:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:11.484 12:58:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:11.484 12:58:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:11.484 12:58:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.484 12:58:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:11.484 12:58:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.484 12:58:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.484 12:58:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.743 12:58:52 -- 
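A quick cross-check of the bdevperf summary above: 3814.18 I/Os per second at the 64 KiB (-o 65536) verify block size works out to 3814.18 × 64 KiB ≈ 238.4 MiB/s, matching the reported 238.39 MiB/s, so the MiB/s column is simply IOPS scaled by the I/O size used for this run.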
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:11.743 00:14:11.743 real 0m5.394s 00:14:11.743 user 0m22.784s 00:14:11.743 sys 0m1.257s 00:14:11.743 12:58:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:11.743 12:58:52 -- common/autotest_common.sh@10 -- # set +x 00:14:11.743 ************************************ 00:14:11.743 END TEST nvmf_host_management 00:14:11.743 ************************************ 00:14:11.743 12:58:52 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:11.743 00:14:11.743 real 0m6.018s 00:14:11.743 user 0m22.992s 00:14:11.743 sys 0m1.518s 00:14:11.743 12:58:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:11.743 12:58:52 -- common/autotest_common.sh@10 -- # set +x 00:14:11.743 ************************************ 00:14:11.743 END TEST nvmf_host_management 00:14:11.743 ************************************ 00:14:11.743 12:58:52 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:11.743 12:58:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:11.743 12:58:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.743 12:58:52 -- common/autotest_common.sh@10 -- # set +x 00:14:11.743 ************************************ 00:14:11.743 START TEST nvmf_lvol 00:14:11.743 ************************************ 00:14:11.743 12:58:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:11.743 * Looking for test storage... 00:14:11.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.743 12:58:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:11.743 12:58:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:11.743 12:58:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:11.743 12:58:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:11.743 12:58:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:11.743 12:58:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:11.743 12:58:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:11.743 12:58:52 -- scripts/common.sh@335 -- # IFS=.-: 00:14:11.743 12:58:52 -- scripts/common.sh@335 -- # read -ra ver1 00:14:11.743 12:58:52 -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.743 12:58:52 -- scripts/common.sh@336 -- # read -ra ver2 00:14:11.743 12:58:52 -- scripts/common.sh@337 -- # local 'op=<' 00:14:11.743 12:58:52 -- scripts/common.sh@339 -- # ver1_l=2 00:14:11.743 12:58:52 -- scripts/common.sh@340 -- # ver2_l=1 00:14:11.743 12:58:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:11.743 12:58:52 -- scripts/common.sh@343 -- # case "$op" in 00:14:11.743 12:58:52 -- scripts/common.sh@344 -- # : 1 00:14:11.743 12:58:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:11.743 12:58:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.743 12:58:52 -- scripts/common.sh@364 -- # decimal 1 00:14:11.743 12:58:52 -- scripts/common.sh@352 -- # local d=1 00:14:11.743 12:58:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.743 12:58:52 -- scripts/common.sh@354 -- # echo 1 00:14:11.743 12:58:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:12.001 12:58:52 -- scripts/common.sh@365 -- # decimal 2 00:14:12.002 12:58:52 -- scripts/common.sh@352 -- # local d=2 00:14:12.002 12:58:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.002 12:58:52 -- scripts/common.sh@354 -- # echo 2 00:14:12.002 12:58:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:12.002 12:58:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:12.002 12:58:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:12.002 12:58:52 -- scripts/common.sh@367 -- # return 0 00:14:12.002 12:58:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.002 12:58:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.002 --rc genhtml_branch_coverage=1 00:14:12.002 --rc genhtml_function_coverage=1 00:14:12.002 --rc genhtml_legend=1 00:14:12.002 --rc geninfo_all_blocks=1 00:14:12.002 --rc geninfo_unexecuted_blocks=1 00:14:12.002 00:14:12.002 ' 00:14:12.002 12:58:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.002 --rc genhtml_branch_coverage=1 00:14:12.002 --rc genhtml_function_coverage=1 00:14:12.002 --rc genhtml_legend=1 00:14:12.002 --rc geninfo_all_blocks=1 00:14:12.002 --rc geninfo_unexecuted_blocks=1 00:14:12.002 00:14:12.002 ' 00:14:12.002 12:58:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.002 --rc genhtml_branch_coverage=1 00:14:12.002 --rc genhtml_function_coverage=1 00:14:12.002 --rc genhtml_legend=1 00:14:12.002 --rc geninfo_all_blocks=1 00:14:12.002 --rc geninfo_unexecuted_blocks=1 00:14:12.002 00:14:12.002 ' 00:14:12.002 12:58:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.002 --rc genhtml_branch_coverage=1 00:14:12.002 --rc genhtml_function_coverage=1 00:14:12.002 --rc genhtml_legend=1 00:14:12.002 --rc geninfo_all_blocks=1 00:14:12.002 --rc geninfo_unexecuted_blocks=1 00:14:12.002 00:14:12.002 ' 00:14:12.002 12:58:52 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.002 12:58:52 -- nvmf/common.sh@7 -- # uname -s 00:14:12.002 12:58:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.002 12:58:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.002 12:58:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.002 12:58:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.002 12:58:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.002 12:58:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.002 12:58:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.002 12:58:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.002 12:58:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.002 12:58:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.002 12:58:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:14:12.002 
12:58:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:14:12.002 12:58:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.002 12:58:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.002 12:58:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.002 12:58:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.002 12:58:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.002 12:58:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.002 12:58:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.002 12:58:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.002 12:58:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.002 12:58:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.002 12:58:52 -- paths/export.sh@5 -- # export PATH 00:14:12.002 12:58:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.002 12:58:52 -- nvmf/common.sh@46 -- # : 0 00:14:12.002 12:58:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:12.002 12:58:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:12.002 12:58:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:12.002 12:58:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.002 12:58:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.002 12:58:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:12.002 12:58:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:12.002 12:58:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:12.002 12:58:52 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.002 12:58:52 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.002 12:58:52 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:12.002 12:58:52 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:12.002 12:58:52 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.002 12:58:52 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:12.002 12:58:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:12.002 12:58:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.002 12:58:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:12.002 12:58:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:12.002 12:58:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:12.002 12:58:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.002 12:58:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.002 12:58:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.002 12:58:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:12.002 12:58:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:12.002 12:58:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:12.002 12:58:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:12.002 12:58:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:12.002 12:58:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:12.002 12:58:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.002 12:58:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.002 12:58:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:12.002 12:58:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:12.002 12:58:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.002 12:58:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.002 12:58:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.002 12:58:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.002 12:58:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.002 12:58:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.002 12:58:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.002 12:58:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.002 12:58:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:12.002 12:58:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:12.002 Cannot find device "nvmf_tgt_br" 00:14:12.002 12:58:52 -- nvmf/common.sh@154 -- # true 00:14:12.002 12:58:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.002 Cannot find device "nvmf_tgt_br2" 00:14:12.002 12:58:52 -- nvmf/common.sh@155 -- # true 00:14:12.002 12:58:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:12.002 12:58:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:12.002 Cannot find device "nvmf_tgt_br" 00:14:12.002 12:58:52 -- nvmf/common.sh@157 -- # true 00:14:12.002 12:58:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:12.002 Cannot find device "nvmf_tgt_br2" 00:14:12.002 12:58:52 -- nvmf/common.sh@158 -- # true 00:14:12.002 12:58:52 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:12.002 12:58:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:12.002 12:58:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.002 12:58:52 -- nvmf/common.sh@161 -- # true 00:14:12.002 12:58:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.002 12:58:52 -- nvmf/common.sh@162 -- # true 00:14:12.002 12:58:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:12.002 12:58:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.002 12:58:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.002 12:58:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.002 12:58:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.002 12:58:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.002 12:58:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.002 12:58:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:12.002 12:58:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:12.261 12:58:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:12.261 12:58:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:12.261 12:58:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:12.261 12:58:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:12.261 12:58:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.261 12:58:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.261 12:58:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.261 12:58:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:12.261 12:58:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:12.261 12:58:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.261 12:58:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.261 12:58:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.261 12:58:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.261 12:58:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.261 12:58:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:12.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:14:12.261 00:14:12.261 --- 10.0.0.2 ping statistics --- 00:14:12.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.261 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:12.261 12:58:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:12.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:12.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:12.261 00:14:12.261 --- 10.0.0.3 ping statistics --- 00:14:12.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.261 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:12.261 12:58:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:12.261 00:14:12.261 --- 10.0.0.1 ping statistics --- 00:14:12.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.261 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:12.261 12:58:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.261 12:58:52 -- nvmf/common.sh@421 -- # return 0 00:14:12.261 12:58:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:12.261 12:58:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.261 12:58:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:12.261 12:58:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:12.261 12:58:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.261 12:58:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:12.261 12:58:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:12.261 12:58:52 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:12.261 12:58:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:12.261 12:58:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.261 12:58:52 -- common/autotest_common.sh@10 -- # set +x 00:14:12.261 12:58:52 -- nvmf/common.sh@469 -- # nvmfpid=82812 00:14:12.261 12:58:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:12.261 12:58:52 -- nvmf/common.sh@470 -- # waitforlisten 82812 00:14:12.261 12:58:52 -- common/autotest_common.sh@829 -- # '[' -z 82812 ']' 00:14:12.261 12:58:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.261 12:58:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.261 12:58:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.261 12:58:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.261 12:58:52 -- common/autotest_common.sh@10 -- # set +x 00:14:12.261 [2024-12-13 12:58:52.952547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:12.261 [2024-12-13 12:58:52.952670] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.520 [2024-12-13 12:58:53.092935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:12.520 [2024-12-13 12:58:53.161326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:12.520 [2024-12-13 12:58:53.161465] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.520 [2024-12-13 12:58:53.161477] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:12.520 [2024-12-13 12:58:53.161484] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.520 [2024-12-13 12:58:53.161918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.520 [2024-12-13 12:58:53.161987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.520 [2024-12-13 12:58:53.161991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.454 12:58:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.454 12:58:53 -- common/autotest_common.sh@862 -- # return 0 00:14:13.454 12:58:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:13.454 12:58:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.454 12:58:53 -- common/autotest_common.sh@10 -- # set +x 00:14:13.454 12:58:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.454 12:58:53 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:13.454 [2024-12-13 12:58:54.202404] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.454 12:58:54 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:14.049 12:58:54 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:14.050 12:58:54 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:14.050 12:58:54 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:14.050 12:58:54 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:14.307 12:58:55 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:14.565 12:58:55 -- target/nvmf_lvol.sh@29 -- # lvs=1e1da7ed-3263-4ce8-9379-c556b8972cdf 00:14:14.565 12:58:55 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1e1da7ed-3263-4ce8-9379-c556b8972cdf lvol 20 00:14:14.823 12:58:55 -- target/nvmf_lvol.sh@32 -- # lvol=14d07b16-80ca-4868-9bd4-997c6096dac8 00:14:14.823 12:58:55 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:15.081 12:58:55 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14d07b16-80ca-4868-9bd4-997c6096dac8 00:14:15.339 12:58:56 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:15.598 [2024-12-13 12:58:56.219428] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.598 12:58:56 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.857 12:58:56 -- target/nvmf_lvol.sh@42 -- # perf_pid=82965 00:14:15.857 12:58:56 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:15.857 12:58:56 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:16.792 12:58:57 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 14d07b16-80ca-4868-9bd4-997c6096dac8 MY_SNAPSHOT 
00:14:17.359 12:58:57 -- target/nvmf_lvol.sh@47 -- # snapshot=cb18c5bc-ac51-4d6a-820a-1b347b6a7143 00:14:17.359 12:58:57 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 14d07b16-80ca-4868-9bd4-997c6096dac8 30 00:14:17.359 12:58:58 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone cb18c5bc-ac51-4d6a-820a-1b347b6a7143 MY_CLONE 00:14:17.925 12:58:58 -- target/nvmf_lvol.sh@49 -- # clone=e5ef82a8-1eec-45dc-949f-1b3f29bb26a9 00:14:17.925 12:58:58 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e5ef82a8-1eec-45dc-949f-1b3f29bb26a9 00:14:18.492 12:58:59 -- target/nvmf_lvol.sh@53 -- # wait 82965 00:14:26.604 Initializing NVMe Controllers 00:14:26.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:26.604 Controller IO queue size 128, less than required. 00:14:26.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:26.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:26.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:26.604 Initialization complete. Launching workers. 00:14:26.604 ======================================================== 00:14:26.604 Latency(us) 00:14:26.604 Device Information : IOPS MiB/s Average min max 00:14:26.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11886.00 46.43 10773.12 1483.63 105329.94 00:14:26.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11859.80 46.33 10800.51 2473.41 48409.26 00:14:26.604 ======================================================== 00:14:26.604 Total : 23745.80 92.76 10786.80 1483.63 105329.94 00:14:26.604 00:14:26.604 12:59:06 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:26.604 12:59:07 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 14d07b16-80ca-4868-9bd4-997c6096dac8 00:14:26.604 12:59:07 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e1da7ed-3263-4ce8-9379-c556b8972cdf 00:14:26.863 12:59:07 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:26.863 12:59:07 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:26.863 12:59:07 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:26.863 12:59:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:26.863 12:59:07 -- nvmf/common.sh@116 -- # sync 00:14:26.863 12:59:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:26.863 12:59:07 -- nvmf/common.sh@119 -- # set +e 00:14:26.863 12:59:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:26.863 12:59:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:26.863 rmmod nvme_tcp 00:14:26.863 rmmod nvme_fabrics 00:14:27.121 rmmod nvme_keyring 00:14:27.121 12:59:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:27.121 12:59:07 -- nvmf/common.sh@123 -- # set -e 00:14:27.121 12:59:07 -- nvmf/common.sh@124 -- # return 0 00:14:27.121 12:59:07 -- nvmf/common.sh@477 -- # '[' -n 82812 ']' 00:14:27.121 12:59:07 -- nvmf/common.sh@478 -- # killprocess 82812 00:14:27.121 12:59:07 -- common/autotest_common.sh@936 -- # '[' -z 82812 ']' 00:14:27.121 12:59:07 -- common/autotest_common.sh@940 -- # kill -0 82812 00:14:27.121 12:59:07 -- common/autotest_common.sh@941 -- # uname 00:14:27.121 
12:59:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:27.121 12:59:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82812 00:14:27.121 12:59:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:27.121 12:59:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:27.122 killing process with pid 82812 00:14:27.122 12:59:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82812' 00:14:27.122 12:59:07 -- common/autotest_common.sh@955 -- # kill 82812 00:14:27.122 12:59:07 -- common/autotest_common.sh@960 -- # wait 82812 00:14:27.381 12:59:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:27.381 12:59:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:27.381 12:59:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:27.381 12:59:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.381 12:59:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:27.381 12:59:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.381 12:59:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.381 12:59:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.381 12:59:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:27.381 ************************************ 00:14:27.381 END TEST nvmf_lvol 00:14:27.381 ************************************ 00:14:27.381 00:14:27.381 real 0m15.601s 00:14:27.381 user 1m5.399s 00:14:27.381 sys 0m3.743s 00:14:27.381 12:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:27.381 12:59:07 -- common/autotest_common.sh@10 -- # set +x 00:14:27.381 12:59:08 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:27.381 12:59:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:27.381 12:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.381 12:59:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.381 ************************************ 00:14:27.381 START TEST nvmf_lvs_grow 00:14:27.381 ************************************ 00:14:27.381 12:59:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:27.381 * Looking for test storage... 
00:14:27.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:27.381 12:59:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:27.381 12:59:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:27.381 12:59:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:27.640 12:59:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:27.640 12:59:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:27.640 12:59:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:27.640 12:59:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:27.640 12:59:08 -- scripts/common.sh@335 -- # IFS=.-: 00:14:27.640 12:59:08 -- scripts/common.sh@335 -- # read -ra ver1 00:14:27.640 12:59:08 -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.640 12:59:08 -- scripts/common.sh@336 -- # read -ra ver2 00:14:27.640 12:59:08 -- scripts/common.sh@337 -- # local 'op=<' 00:14:27.640 12:59:08 -- scripts/common.sh@339 -- # ver1_l=2 00:14:27.640 12:59:08 -- scripts/common.sh@340 -- # ver2_l=1 00:14:27.640 12:59:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:27.640 12:59:08 -- scripts/common.sh@343 -- # case "$op" in 00:14:27.640 12:59:08 -- scripts/common.sh@344 -- # : 1 00:14:27.640 12:59:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:27.640 12:59:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.640 12:59:08 -- scripts/common.sh@364 -- # decimal 1 00:14:27.640 12:59:08 -- scripts/common.sh@352 -- # local d=1 00:14:27.640 12:59:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.640 12:59:08 -- scripts/common.sh@354 -- # echo 1 00:14:27.640 12:59:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:27.640 12:59:08 -- scripts/common.sh@365 -- # decimal 2 00:14:27.640 12:59:08 -- scripts/common.sh@352 -- # local d=2 00:14:27.640 12:59:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.640 12:59:08 -- scripts/common.sh@354 -- # echo 2 00:14:27.640 12:59:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:27.640 12:59:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:27.640 12:59:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:27.640 12:59:08 -- scripts/common.sh@367 -- # return 0 00:14:27.640 12:59:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.640 12:59:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.640 --rc genhtml_branch_coverage=1 00:14:27.640 --rc genhtml_function_coverage=1 00:14:27.640 --rc genhtml_legend=1 00:14:27.640 --rc geninfo_all_blocks=1 00:14:27.640 --rc geninfo_unexecuted_blocks=1 00:14:27.640 00:14:27.640 ' 00:14:27.640 12:59:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.640 --rc genhtml_branch_coverage=1 00:14:27.640 --rc genhtml_function_coverage=1 00:14:27.640 --rc genhtml_legend=1 00:14:27.640 --rc geninfo_all_blocks=1 00:14:27.640 --rc geninfo_unexecuted_blocks=1 00:14:27.640 00:14:27.640 ' 00:14:27.640 12:59:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.640 --rc genhtml_branch_coverage=1 00:14:27.640 --rc genhtml_function_coverage=1 00:14:27.640 --rc genhtml_legend=1 00:14:27.640 --rc geninfo_all_blocks=1 00:14:27.640 --rc geninfo_unexecuted_blocks=1 00:14:27.640 00:14:27.640 ' 00:14:27.640 
12:59:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.640 --rc genhtml_branch_coverage=1 00:14:27.640 --rc genhtml_function_coverage=1 00:14:27.640 --rc genhtml_legend=1 00:14:27.640 --rc geninfo_all_blocks=1 00:14:27.640 --rc geninfo_unexecuted_blocks=1 00:14:27.640 00:14:27.640 ' 00:14:27.640 12:59:08 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.640 12:59:08 -- nvmf/common.sh@7 -- # uname -s 00:14:27.640 12:59:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.640 12:59:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.640 12:59:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.640 12:59:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.640 12:59:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.640 12:59:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.640 12:59:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.640 12:59:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.640 12:59:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.640 12:59:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.640 12:59:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:14:27.640 12:59:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:14:27.640 12:59:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.640 12:59:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.640 12:59:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.640 12:59:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.640 12:59:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.640 12:59:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.640 12:59:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.640 12:59:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.640 12:59:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.640 12:59:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.640 12:59:08 -- paths/export.sh@5 -- # export PATH 00:14:27.640 12:59:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.640 12:59:08 -- nvmf/common.sh@46 -- # : 0 00:14:27.640 12:59:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:27.640 12:59:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:27.640 12:59:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:27.640 12:59:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.641 12:59:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.641 12:59:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:27.641 12:59:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:27.641 12:59:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:27.641 12:59:08 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.641 12:59:08 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.641 12:59:08 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:27.641 12:59:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:27.641 12:59:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.641 12:59:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:27.641 12:59:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:27.641 12:59:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:27.641 12:59:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.641 12:59:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.641 12:59:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.641 12:59:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:27.641 12:59:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:27.641 12:59:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:27.641 12:59:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:27.641 12:59:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:27.641 12:59:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:27.641 12:59:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.641 12:59:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.641 12:59:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:27.641 12:59:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:27.641 12:59:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:27.641 12:59:08 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:27.641 12:59:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:27.641 12:59:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.641 12:59:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:27.641 12:59:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:27.641 12:59:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:27.641 12:59:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:27.641 12:59:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:27.641 12:59:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:27.641 Cannot find device "nvmf_tgt_br" 00:14:27.641 12:59:08 -- nvmf/common.sh@154 -- # true 00:14:27.641 12:59:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.641 Cannot find device "nvmf_tgt_br2" 00:14:27.641 12:59:08 -- nvmf/common.sh@155 -- # true 00:14:27.641 12:59:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:27.641 12:59:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:27.641 Cannot find device "nvmf_tgt_br" 00:14:27.641 12:59:08 -- nvmf/common.sh@157 -- # true 00:14:27.641 12:59:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:27.641 Cannot find device "nvmf_tgt_br2" 00:14:27.641 12:59:08 -- nvmf/common.sh@158 -- # true 00:14:27.641 12:59:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:27.641 12:59:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:27.641 12:59:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:27.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.641 12:59:08 -- nvmf/common.sh@161 -- # true 00:14:27.641 12:59:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:27.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.641 12:59:08 -- nvmf/common.sh@162 -- # true 00:14:27.641 12:59:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:27.641 12:59:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:27.641 12:59:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:27.641 12:59:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:27.641 12:59:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:27.641 12:59:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:27.900 12:59:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:27.900 12:59:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:27.900 12:59:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:27.900 12:59:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:27.900 12:59:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:27.900 12:59:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:27.900 12:59:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:27.900 12:59:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:27.900 12:59:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
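Up to this point nvmf_veth_init has only built the plumbing: a target network namespace and three veth pairs, with the initiator side addressed as 10.0.0.1 and the two target-side interfaces as 10.0.0.2 and 10.0.0.3. A condensed sketch of those steps, using only the interface and namespace names that appear in the trace (the script's own cleanup and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

The bridge that joins the host-side peers, the firewall rule for port 4420, and the reachability checks follow in the trace below.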
00:14:27.900 12:59:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:27.900 12:59:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:27.900 12:59:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:27.900 12:59:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.900 12:59:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.900 12:59:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.900 12:59:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.900 12:59:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:27.900 12:59:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:27.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:27.900 00:14:27.900 --- 10.0.0.2 ping statistics --- 00:14:27.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.900 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:27.900 12:59:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:27.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:27.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:14:27.900 00:14:27.900 --- 10.0.0.3 ping statistics --- 00:14:27.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.900 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:27.900 12:59:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:27.900 00:14:27.900 --- 10.0.0.1 ping statistics --- 00:14:27.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.900 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:27.900 12:59:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.900 12:59:08 -- nvmf/common.sh@421 -- # return 0 00:14:27.900 12:59:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:27.900 12:59:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.900 12:59:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:27.900 12:59:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:27.900 12:59:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.900 12:59:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:27.900 12:59:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:27.900 12:59:08 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:27.900 12:59:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:27.900 12:59:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.900 12:59:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.900 12:59:08 -- nvmf/common.sh@469 -- # nvmfpid=83325 00:14:27.900 12:59:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.900 12:59:08 -- nvmf/common.sh@470 -- # waitforlisten 83325 00:14:27.900 12:59:08 -- common/autotest_common.sh@829 -- # '[' -z 83325 ']' 00:14:27.900 12:59:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.900 12:59:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
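With the links up, the rest of the setup just traced bridges the host-side veth ends together, opens TCP port 4420, verifies reachability in both directions, and launches the target inside the namespace. A condensed sketch, again limited to commands and flags recorded above (the nvmf_tgt invocation is the one logged for pid 83325):

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &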
00:14:27.900 12:59:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.900 12:59:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.900 12:59:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.900 [2024-12-13 12:59:08.631314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:27.900 [2024-12-13 12:59:08.631413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.159 [2024-12-13 12:59:08.771789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.159 [2024-12-13 12:59:08.837959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.159 [2024-12-13 12:59:08.838100] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.159 [2024-12-13 12:59:08.838112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.159 [2024-12-13 12:59:08.838120] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.159 [2024-12-13 12:59:08.838142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.095 12:59:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.096 12:59:09 -- common/autotest_common.sh@862 -- # return 0 00:14:29.096 12:59:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:29.096 12:59:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.096 12:59:09 -- common/autotest_common.sh@10 -- # set +x 00:14:29.096 12:59:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.096 12:59:09 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:29.096 [2024-12-13 12:59:09.863120] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:29.354 12:59:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.354 12:59:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.354 12:59:09 -- common/autotest_common.sh@10 -- # set +x 00:14:29.354 ************************************ 00:14:29.354 START TEST lvs_grow_clean 00:14:29.354 ************************************ 00:14:29.354 12:59:09 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:29.354 12:59:09 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:29.355 12:59:09 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:29.355 12:59:09 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.613 12:59:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:29.613 12:59:10 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:29.613 12:59:10 -- target/nvmf_lvs_grow.sh@28 -- # lvs=0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:29.613 12:59:10 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:29.613 12:59:10 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:29.871 12:59:10 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:29.871 12:59:10 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:29.871 12:59:10 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 lvol 150 00:14:30.130 12:59:10 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b1740bc-ca19-4157-8d9c-b3a54e34bbae 00:14:30.130 12:59:10 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:30.130 12:59:10 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:30.388 [2024-12-13 12:59:11.071682] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:30.388 [2024-12-13 12:59:11.071800] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:30.388 true 00:14:30.388 12:59:11 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:30.388 12:59:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:30.646 12:59:11 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:30.646 12:59:11 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:30.904 12:59:11 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b1740bc-ca19-4157-8d9c-b3a54e34bbae 00:14:31.162 12:59:11 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:31.419 [2024-12-13 12:59:12.060310] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.419 12:59:12 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
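The RPC sequence traced above reduces to: create an lvstore on a 200 MiB AIO bdev, carve a 150 MiB lvol out of it, enlarge the backing file to 400 MiB and rescan, then export the lvol over NVMe/TCP for bdevperf to write against. A condensed sketch using the names and UUIDs from this run ($rpc and $aio are just shorthands for the full paths shown in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs     # -> 49 data clusters of 4 MiB
    $rpc bdev_lvol_create -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 lvol 150
    truncate -s 400M "$aio"                                # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                          # 51200 -> 102400 blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b1740bc-ca19-4157-8d9c-b3a54e34bbae
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The lvstore itself is only grown later, while bdevperf is running, via bdev_lvol_grow_lvstore; that call and the 49 -> 99 cluster check appear further down in the trace.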
00:14:31.678 12:59:12 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.678 12:59:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83492 00:14:31.678 12:59:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.678 12:59:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83492 /var/tmp/bdevperf.sock 00:14:31.678 12:59:12 -- common/autotest_common.sh@829 -- # '[' -z 83492 ']' 00:14:31.678 12:59:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.678 12:59:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.678 12:59:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.678 12:59:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.678 12:59:12 -- common/autotest_common.sh@10 -- # set +x 00:14:31.678 [2024-12-13 12:59:12.385655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:31.678 [2024-12-13 12:59:12.385752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83492 ] 00:14:31.936 [2024-12-13 12:59:12.518999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.936 [2024-12-13 12:59:12.576867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.870 12:59:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.870 12:59:13 -- common/autotest_common.sh@862 -- # return 0 00:14:32.870 12:59:13 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:33.128 Nvme0n1 00:14:33.128 12:59:13 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:33.387 [ 00:14:33.387 { 00:14:33.387 "aliases": [ 00:14:33.387 "6b1740bc-ca19-4157-8d9c-b3a54e34bbae" 00:14:33.387 ], 00:14:33.387 "assigned_rate_limits": { 00:14:33.387 "r_mbytes_per_sec": 0, 00:14:33.387 "rw_ios_per_sec": 0, 00:14:33.387 "rw_mbytes_per_sec": 0, 00:14:33.387 "w_mbytes_per_sec": 0 00:14:33.387 }, 00:14:33.387 "block_size": 4096, 00:14:33.387 "claimed": false, 00:14:33.387 "driver_specific": { 00:14:33.387 "mp_policy": "active_passive", 00:14:33.387 "nvme": [ 00:14:33.387 { 00:14:33.387 "ctrlr_data": { 00:14:33.387 "ana_reporting": false, 00:14:33.387 "cntlid": 1, 00:14:33.387 "firmware_revision": "24.01.1", 00:14:33.387 "model_number": "SPDK bdev Controller", 00:14:33.387 "multi_ctrlr": true, 00:14:33.387 "oacs": { 00:14:33.387 "firmware": 0, 00:14:33.387 "format": 0, 00:14:33.387 "ns_manage": 0, 00:14:33.387 "security": 0 00:14:33.387 }, 00:14:33.387 "serial_number": "SPDK0", 00:14:33.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.387 "vendor_id": "0x8086" 00:14:33.387 }, 00:14:33.387 "ns_data": { 00:14:33.387 "can_share": true, 00:14:33.387 "id": 1 00:14:33.387 }, 00:14:33.387 "trid": { 00:14:33.387 "adrfam": "IPv4", 00:14:33.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.387 "traddr": "10.0.0.2", 00:14:33.387 "trsvcid": "4420", 00:14:33.387 "trtype": "TCP" 00:14:33.387 }, 
00:14:33.387 "vs": { 00:14:33.387 "nvme_version": "1.3" 00:14:33.387 } 00:14:33.387 } 00:14:33.387 ] 00:14:33.387 }, 00:14:33.387 "name": "Nvme0n1", 00:14:33.387 "num_blocks": 38912, 00:14:33.387 "product_name": "NVMe disk", 00:14:33.387 "supported_io_types": { 00:14:33.387 "abort": true, 00:14:33.388 "compare": true, 00:14:33.388 "compare_and_write": true, 00:14:33.388 "flush": true, 00:14:33.388 "nvme_admin": true, 00:14:33.388 "nvme_io": true, 00:14:33.388 "read": true, 00:14:33.388 "reset": true, 00:14:33.388 "unmap": true, 00:14:33.388 "write": true, 00:14:33.388 "write_zeroes": true 00:14:33.388 }, 00:14:33.388 "uuid": "6b1740bc-ca19-4157-8d9c-b3a54e34bbae", 00:14:33.388 "zoned": false 00:14:33.388 } 00:14:33.388 ] 00:14:33.388 12:59:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83542 00:14:33.388 12:59:13 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.388 12:59:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:33.388 Running I/O for 10 seconds... 00:14:34.324 Latency(us) 00:14:34.324 [2024-12-13T12:59:15.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.324 [2024-12-13T12:59:15.100Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.324 Nvme0n1 : 1.00 7359.00 28.75 0.00 0.00 0.00 0.00 0.00 00:14:34.324 [2024-12-13T12:59:15.100Z] =================================================================================================================== 00:14:34.324 [2024-12-13T12:59:15.100Z] Total : 7359.00 28.75 0.00 0.00 0.00 0.00 0.00 00:14:34.324 00:14:35.259 12:59:15 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:35.259 [2024-12-13T12:59:16.035Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.259 Nvme0n1 : 2.00 7330.00 28.63 0.00 0.00 0.00 0.00 0.00 00:14:35.259 [2024-12-13T12:59:16.035Z] =================================================================================================================== 00:14:35.259 [2024-12-13T12:59:16.035Z] Total : 7330.00 28.63 0.00 0.00 0.00 0.00 0.00 00:14:35.259 00:14:35.518 true 00:14:35.518 12:59:16 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:35.518 12:59:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:35.777 12:59:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:35.777 12:59:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:35.777 12:59:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 83542 00:14:36.344 [2024-12-13T12:59:17.120Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.344 Nvme0n1 : 3.00 7316.33 28.58 0.00 0.00 0.00 0.00 0.00 00:14:36.344 [2024-12-13T12:59:17.120Z] =================================================================================================================== 00:14:36.344 [2024-12-13T12:59:17.120Z] Total : 7316.33 28.58 0.00 0.00 0.00 0.00 0.00 00:14:36.344 00:14:37.279 [2024-12-13T12:59:18.055Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.279 Nvme0n1 : 4.00 7278.25 28.43 0.00 0.00 0.00 0.00 0.00 00:14:37.279 [2024-12-13T12:59:18.055Z] =================================================================================================================== 00:14:37.279 
[2024-12-13T12:59:18.055Z] Total : 7278.25 28.43 0.00 0.00 0.00 0.00 0.00 00:14:37.279 00:14:38.676 [2024-12-13T12:59:19.452Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.676 Nvme0n1 : 5.00 7282.20 28.45 0.00 0.00 0.00 0.00 0.00 00:14:38.676 [2024-12-13T12:59:19.452Z] =================================================================================================================== 00:14:38.676 [2024-12-13T12:59:19.452Z] Total : 7282.20 28.45 0.00 0.00 0.00 0.00 0.00 00:14:38.676 00:14:39.260 [2024-12-13T12:59:20.036Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.260 Nvme0n1 : 6.00 7264.33 28.38 0.00 0.00 0.00 0.00 0.00 00:14:39.260 [2024-12-13T12:59:20.036Z] =================================================================================================================== 00:14:39.260 [2024-12-13T12:59:20.036Z] Total : 7264.33 28.38 0.00 0.00 0.00 0.00 0.00 00:14:39.260 00:14:40.636 [2024-12-13T12:59:21.412Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.636 Nvme0n1 : 7.00 7279.00 28.43 0.00 0.00 0.00 0.00 0.00 00:14:40.636 [2024-12-13T12:59:21.412Z] =================================================================================================================== 00:14:40.636 [2024-12-13T12:59:21.412Z] Total : 7279.00 28.43 0.00 0.00 0.00 0.00 0.00 00:14:40.636 00:14:41.573 [2024-12-13T12:59:22.349Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.573 Nvme0n1 : 8.00 7265.25 28.38 0.00 0.00 0.00 0.00 0.00 00:14:41.573 [2024-12-13T12:59:22.349Z] =================================================================================================================== 00:14:41.573 [2024-12-13T12:59:22.349Z] Total : 7265.25 28.38 0.00 0.00 0.00 0.00 0.00 00:14:41.573 00:14:42.508 [2024-12-13T12:59:23.284Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.508 Nvme0n1 : 9.00 7256.33 28.35 0.00 0.00 0.00 0.00 0.00 00:14:42.508 [2024-12-13T12:59:23.284Z] =================================================================================================================== 00:14:42.508 [2024-12-13T12:59:23.284Z] Total : 7256.33 28.35 0.00 0.00 0.00 0.00 0.00 00:14:42.508 00:14:43.445 [2024-12-13T12:59:24.221Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.445 Nvme0n1 : 10.00 7240.40 28.28 0.00 0.00 0.00 0.00 0.00 00:14:43.445 [2024-12-13T12:59:24.221Z] =================================================================================================================== 00:14:43.445 [2024-12-13T12:59:24.221Z] Total : 7240.40 28.28 0.00 0.00 0.00 0.00 0.00 00:14:43.445 00:14:43.445 00:14:43.445 Latency(us) 00:14:43.445 [2024-12-13T12:59:24.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.445 [2024-12-13T12:59:24.221Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.445 Nvme0n1 : 10.01 7242.82 28.29 0.00 0.00 17667.44 7983.48 35746.91 00:14:43.445 [2024-12-13T12:59:24.221Z] =================================================================================================================== 00:14:43.445 [2024-12-13T12:59:24.221Z] Total : 7242.82 28.29 0.00 0.00 17667.44 7983.48 35746.91 00:14:43.445 0 00:14:43.445 12:59:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83492 00:14:43.445 12:59:24 -- common/autotest_common.sh@936 -- # '[' -z 83492 ']' 00:14:43.445 12:59:24 -- common/autotest_common.sh@940 -- # 
kill -0 83492 00:14:43.445 12:59:24 -- common/autotest_common.sh@941 -- # uname 00:14:43.445 12:59:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.445 12:59:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83492 00:14:43.445 killing process with pid 83492 00:14:43.445 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.445 00:14:43.445 Latency(us) 00:14:43.445 [2024-12-13T12:59:24.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.445 [2024-12-13T12:59:24.221Z] =================================================================================================================== 00:14:43.445 [2024-12-13T12:59:24.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.445 12:59:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:43.445 12:59:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:43.445 12:59:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83492' 00:14:43.445 12:59:24 -- common/autotest_common.sh@955 -- # kill 83492 00:14:43.445 12:59:24 -- common/autotest_common.sh@960 -- # wait 83492 00:14:43.704 12:59:24 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:43.963 12:59:24 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:43.963 12:59:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:44.221 12:59:24 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:44.221 12:59:24 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:44.221 12:59:24 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.478 [2024-12-13 12:59:25.023503] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:44.478 12:59:25 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:44.478 12:59:25 -- common/autotest_common.sh@650 -- # local es=0 00:14:44.478 12:59:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:44.478 12:59:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.478 12:59:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.478 12:59:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.478 12:59:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.478 12:59:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.479 12:59:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.479 12:59:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.479 12:59:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:44.479 12:59:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:44.479 2024/12/13 12:59:25 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0d8ed8d3-2087-42e4-8369-ce5e727e1b16], err: 
error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:44.479 request: 00:14:44.479 { 00:14:44.479 "method": "bdev_lvol_get_lvstores", 00:14:44.479 "params": { 00:14:44.479 "uuid": "0d8ed8d3-2087-42e4-8369-ce5e727e1b16" 00:14:44.479 } 00:14:44.479 } 00:14:44.479 Got JSON-RPC error response 00:14:44.479 GoRPCClient: error on JSON-RPC call 00:14:44.736 12:59:25 -- common/autotest_common.sh@653 -- # es=1 00:14:44.736 12:59:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.736 12:59:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.736 12:59:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.736 12:59:25 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:44.736 aio_bdev 00:14:44.736 12:59:25 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6b1740bc-ca19-4157-8d9c-b3a54e34bbae 00:14:44.736 12:59:25 -- common/autotest_common.sh@897 -- # local bdev_name=6b1740bc-ca19-4157-8d9c-b3a54e34bbae 00:14:44.736 12:59:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:44.736 12:59:25 -- common/autotest_common.sh@899 -- # local i 00:14:44.736 12:59:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:44.736 12:59:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:44.736 12:59:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:44.994 12:59:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b1740bc-ca19-4157-8d9c-b3a54e34bbae -t 2000 00:14:45.252 [ 00:14:45.252 { 00:14:45.252 "aliases": [ 00:14:45.252 "lvs/lvol" 00:14:45.252 ], 00:14:45.252 "assigned_rate_limits": { 00:14:45.252 "r_mbytes_per_sec": 0, 00:14:45.252 "rw_ios_per_sec": 0, 00:14:45.252 "rw_mbytes_per_sec": 0, 00:14:45.252 "w_mbytes_per_sec": 0 00:14:45.252 }, 00:14:45.252 "block_size": 4096, 00:14:45.252 "claimed": false, 00:14:45.252 "driver_specific": { 00:14:45.252 "lvol": { 00:14:45.252 "base_bdev": "aio_bdev", 00:14:45.252 "clone": false, 00:14:45.252 "esnap_clone": false, 00:14:45.252 "lvol_store_uuid": "0d8ed8d3-2087-42e4-8369-ce5e727e1b16", 00:14:45.252 "snapshot": false, 00:14:45.252 "thin_provision": false 00:14:45.252 } 00:14:45.252 }, 00:14:45.252 "name": "6b1740bc-ca19-4157-8d9c-b3a54e34bbae", 00:14:45.252 "num_blocks": 38912, 00:14:45.252 "product_name": "Logical Volume", 00:14:45.252 "supported_io_types": { 00:14:45.252 "abort": false, 00:14:45.252 "compare": false, 00:14:45.252 "compare_and_write": false, 00:14:45.252 "flush": false, 00:14:45.252 "nvme_admin": false, 00:14:45.252 "nvme_io": false, 00:14:45.252 "read": true, 00:14:45.252 "reset": true, 00:14:45.252 "unmap": true, 00:14:45.252 "write": true, 00:14:45.252 "write_zeroes": true 00:14:45.252 }, 00:14:45.252 "uuid": "6b1740bc-ca19-4157-8d9c-b3a54e34bbae", 00:14:45.252 "zoned": false 00:14:45.252 } 00:14:45.252 ] 00:14:45.252 12:59:25 -- common/autotest_common.sh@905 -- # return 0 00:14:45.252 12:59:25 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:45.252 12:59:25 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:45.511 12:59:26 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:45.511 12:59:26 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:45.511 12:59:26 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:45.769 12:59:26 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:45.769 12:59:26 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6b1740bc-ca19-4157-8d9c-b3a54e34bbae 00:14:46.027 12:59:26 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d8ed8d3-2087-42e4-8369-ce5e727e1b16 00:14:46.027 12:59:26 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.285 12:59:27 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:46.860 00:14:46.860 real 0m17.494s 00:14:46.860 user 0m16.985s 00:14:46.860 sys 0m1.991s 00:14:46.860 12:59:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.860 12:59:27 -- common/autotest_common.sh@10 -- # set +x 00:14:46.860 ************************************ 00:14:46.860 END TEST lvs_grow_clean 00:14:46.860 ************************************ 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:46.860 12:59:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:46.860 12:59:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.860 12:59:27 -- common/autotest_common.sh@10 -- # set +x 00:14:46.860 ************************************ 00:14:46.860 START TEST lvs_grow_dirty 00:14:46.860 ************************************ 00:14:46.860 12:59:27 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:46.860 12:59:27 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:47.122 12:59:27 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:47.122 12:59:27 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:47.380 12:59:27 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7aa7e420-a055-4151-9768-a54b860b3ccd 00:14:47.380 12:59:27 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:47.380 12:59:27 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:14:47.638 12:59:28 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:47.638 12:59:28 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:47.638 12:59:28 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
7aa7e420-a055-4151-9768-a54b860b3ccd lvol 150 00:14:47.897 12:59:28 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef2d5092-4d46-4bc2-847e-2b363b807820 00:14:47.897 12:59:28 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:47.897 12:59:28 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:48.156 [2024-12-13 12:59:28.718643] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:48.156 [2024-12-13 12:59:28.718714] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:48.156 true 00:14:48.156 12:59:28 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:14:48.156 12:59:28 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:48.414 12:59:28 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:48.414 12:59:28 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:48.673 12:59:29 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef2d5092-4d46-4bc2-847e-2b363b807820 00:14:48.673 12:59:29 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:48.932 12:59:29 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.191 12:59:29 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83922 00:14:49.191 12:59:29 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:49.191 12:59:29 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.191 12:59:29 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83922 /var/tmp/bdevperf.sock 00:14:49.191 12:59:29 -- common/autotest_common.sh@829 -- # '[' -z 83922 ']' 00:14:49.191 12:59:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.191 12:59:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.191 12:59:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.191 12:59:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.191 12:59:29 -- common/autotest_common.sh@10 -- # set +x 00:14:49.450 [2024-12-13 12:59:29.970022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:49.450 [2024-12-13 12:59:29.970095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83922 ] 00:14:49.450 [2024-12-13 12:59:30.096992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.450 [2024-12-13 12:59:30.154823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.385 12:59:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.385 12:59:30 -- common/autotest_common.sh@862 -- # return 0 00:14:50.386 12:59:30 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:50.644 Nvme0n1 00:14:50.644 12:59:31 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:50.903 [ 00:14:50.903 { 00:14:50.903 "aliases": [ 00:14:50.903 "ef2d5092-4d46-4bc2-847e-2b363b807820" 00:14:50.903 ], 00:14:50.903 "assigned_rate_limits": { 00:14:50.903 "r_mbytes_per_sec": 0, 00:14:50.903 "rw_ios_per_sec": 0, 00:14:50.903 "rw_mbytes_per_sec": 0, 00:14:50.903 "w_mbytes_per_sec": 0 00:14:50.903 }, 00:14:50.903 "block_size": 4096, 00:14:50.903 "claimed": false, 00:14:50.903 "driver_specific": { 00:14:50.903 "mp_policy": "active_passive", 00:14:50.903 "nvme": [ 00:14:50.903 { 00:14:50.903 "ctrlr_data": { 00:14:50.903 "ana_reporting": false, 00:14:50.903 "cntlid": 1, 00:14:50.903 "firmware_revision": "24.01.1", 00:14:50.903 "model_number": "SPDK bdev Controller", 00:14:50.903 "multi_ctrlr": true, 00:14:50.903 "oacs": { 00:14:50.903 "firmware": 0, 00:14:50.903 "format": 0, 00:14:50.903 "ns_manage": 0, 00:14:50.903 "security": 0 00:14:50.903 }, 00:14:50.903 "serial_number": "SPDK0", 00:14:50.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.903 "vendor_id": "0x8086" 00:14:50.903 }, 00:14:50.903 "ns_data": { 00:14:50.903 "can_share": true, 00:14:50.903 "id": 1 00:14:50.903 }, 00:14:50.903 "trid": { 00:14:50.903 "adrfam": "IPv4", 00:14:50.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.903 "traddr": "10.0.0.2", 00:14:50.903 "trsvcid": "4420", 00:14:50.903 "trtype": "TCP" 00:14:50.903 }, 00:14:50.903 "vs": { 00:14:50.903 "nvme_version": "1.3" 00:14:50.903 } 00:14:50.903 } 00:14:50.903 ] 00:14:50.903 }, 00:14:50.903 "name": "Nvme0n1", 00:14:50.903 "num_blocks": 38912, 00:14:50.903 "product_name": "NVMe disk", 00:14:50.903 "supported_io_types": { 00:14:50.903 "abort": true, 00:14:50.903 "compare": true, 00:14:50.903 "compare_and_write": true, 00:14:50.903 "flush": true, 00:14:50.903 "nvme_admin": true, 00:14:50.903 "nvme_io": true, 00:14:50.903 "read": true, 00:14:50.903 "reset": true, 00:14:50.903 "unmap": true, 00:14:50.903 "write": true, 00:14:50.903 "write_zeroes": true 00:14:50.903 }, 00:14:50.903 "uuid": "ef2d5092-4d46-4bc2-847e-2b363b807820", 00:14:50.903 "zoned": false 00:14:50.903 } 00:14:50.903 ] 00:14:50.903 12:59:31 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:50.903 12:59:31 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83975 00:14:50.903 12:59:31 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:50.903 Running I/O for 10 seconds... 
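During the 10-second randwrite job whose per-second numbers follow, the test grows the lvstore onto the enlarged AIO bdev and re-reads the cluster count over RPC. Condensed from the trace for this (dirty) run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7aa7e420-a055-4151-9768-a54b860b3ccd
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd \
        | jq -r '.[0].total_data_clusters'                 # 49 before the grow, 99 after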
00:14:51.840 Latency(us) 00:14:51.840 [2024-12-13T12:59:32.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.840 [2024-12-13T12:59:32.616Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.840 Nvme0n1 : 1.00 7593.00 29.66 0.00 0.00 0.00 0.00 0.00 00:14:51.840 [2024-12-13T12:59:32.616Z] =================================================================================================================== 00:14:51.840 [2024-12-13T12:59:32.616Z] Total : 7593.00 29.66 0.00 0.00 0.00 0.00 0.00 00:14:51.840 00:14:52.776 12:59:33 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:14:52.776 [2024-12-13T12:59:33.552Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.776 Nvme0n1 : 2.00 7546.00 29.48 0.00 0.00 0.00 0.00 0.00 00:14:52.776 [2024-12-13T12:59:33.552Z] =================================================================================================================== 00:14:52.777 [2024-12-13T12:59:33.553Z] Total : 7546.00 29.48 0.00 0.00 0.00 0.00 0.00 00:14:52.777 00:14:53.035 true 00:14:53.035 12:59:33 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:14:53.035 12:59:33 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:53.603 12:59:34 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:53.603 12:59:34 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:53.603 12:59:34 -- target/nvmf_lvs_grow.sh@65 -- # wait 83975 00:14:53.862 [2024-12-13T12:59:34.638Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.862 Nvme0n1 : 3.00 7236.67 28.27 0.00 0.00 0.00 0.00 0.00 00:14:53.862 [2024-12-13T12:59:34.638Z] =================================================================================================================== 00:14:53.862 [2024-12-13T12:59:34.638Z] Total : 7236.67 28.27 0.00 0.00 0.00 0.00 0.00 00:14:53.862 00:14:54.797 [2024-12-13T12:59:35.573Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.797 Nvme0n1 : 4.00 7087.25 27.68 0.00 0.00 0.00 0.00 0.00 00:14:54.797 [2024-12-13T12:59:35.573Z] =================================================================================================================== 00:14:54.797 [2024-12-13T12:59:35.573Z] Total : 7087.25 27.68 0.00 0.00 0.00 0.00 0.00 00:14:54.797 00:14:56.174 [2024-12-13T12:59:36.950Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.174 Nvme0n1 : 5.00 7153.20 27.94 0.00 0.00 0.00 0.00 0.00 00:14:56.174 [2024-12-13T12:59:36.951Z] =================================================================================================================== 00:14:56.175 [2024-12-13T12:59:36.951Z] Total : 7153.20 27.94 0.00 0.00 0.00 0.00 0.00 00:14:56.175 00:14:57.110 [2024-12-13T12:59:37.886Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.110 Nvme0n1 : 6.00 7192.50 28.10 0.00 0.00 0.00 0.00 0.00 00:14:57.110 [2024-12-13T12:59:37.886Z] =================================================================================================================== 00:14:57.110 [2024-12-13T12:59:37.886Z] Total : 7192.50 28.10 0.00 0.00 0.00 0.00 0.00 00:14:57.110 00:14:58.047 [2024-12-13T12:59:38.823Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:58.047 Nvme0n1 : 7.00 7213.43 28.18 0.00 0.00 0.00 0.00 0.00 00:14:58.047 [2024-12-13T12:59:38.823Z] =================================================================================================================== 00:14:58.047 [2024-12-13T12:59:38.823Z] Total : 7213.43 28.18 0.00 0.00 0.00 0.00 0.00 00:14:58.047 00:14:58.994 [2024-12-13T12:59:39.770Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.994 Nvme0n1 : 8.00 7137.62 27.88 0.00 0.00 0.00 0.00 0.00 00:14:58.994 [2024-12-13T12:59:39.770Z] =================================================================================================================== 00:14:58.994 [2024-12-13T12:59:39.770Z] Total : 7137.62 27.88 0.00 0.00 0.00 0.00 0.00 00:14:58.994 00:14:59.981 [2024-12-13T12:59:40.757Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.981 Nvme0n1 : 9.00 7125.67 27.83 0.00 0.00 0.00 0.00 0.00 00:14:59.981 [2024-12-13T12:59:40.757Z] =================================================================================================================== 00:14:59.981 [2024-12-13T12:59:40.757Z] Total : 7125.67 27.83 0.00 0.00 0.00 0.00 0.00 00:14:59.981 00:15:00.917 [2024-12-13T12:59:41.693Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.917 Nvme0n1 : 10.00 7103.90 27.75 0.00 0.00 0.00 0.00 0.00 00:15:00.917 [2024-12-13T12:59:41.693Z] =================================================================================================================== 00:15:00.917 [2024-12-13T12:59:41.693Z] Total : 7103.90 27.75 0.00 0.00 0.00 0.00 0.00 00:15:00.917 00:15:00.917 00:15:00.917 Latency(us) 00:15:00.917 [2024-12-13T12:59:41.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.917 [2024-12-13T12:59:41.693Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.917 Nvme0n1 : 10.02 7104.59 27.75 0.00 0.00 18004.01 5302.46 209715.20 00:15:00.917 [2024-12-13T12:59:41.693Z] =================================================================================================================== 00:15:00.917 [2024-12-13T12:59:41.693Z] Total : 7104.59 27.75 0.00 0.00 18004.01 5302.46 209715.20 00:15:00.917 0 00:15:00.917 12:59:41 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83922 00:15:00.917 12:59:41 -- common/autotest_common.sh@936 -- # '[' -z 83922 ']' 00:15:00.917 12:59:41 -- common/autotest_common.sh@940 -- # kill -0 83922 00:15:00.917 12:59:41 -- common/autotest_common.sh@941 -- # uname 00:15:00.917 12:59:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:00.917 12:59:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83922 00:15:00.917 12:59:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:00.917 12:59:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:00.917 12:59:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83922' 00:15:00.917 killing process with pid 83922 00:15:00.917 12:59:41 -- common/autotest_common.sh@955 -- # kill 83922 00:15:00.917 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.917 00:15:00.917 Latency(us) 00:15:00.917 [2024-12-13T12:59:41.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.917 [2024-12-13T12:59:41.693Z] =================================================================================================================== 00:15:00.917 [2024-12-13T12:59:41.693Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:15:00.917 12:59:41 -- common/autotest_common.sh@960 -- # wait 83922 00:15:01.175 12:59:41 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:01.433 12:59:42 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:01.433 12:59:42 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:01.692 12:59:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:01.692 12:59:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:01.692 12:59:42 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83325 00:15:01.692 12:59:42 -- target/nvmf_lvs_grow.sh@74 -- # wait 83325 00:15:01.692 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83325 Killed "${NVMF_APP[@]}" "$@" 00:15:01.692 12:59:42 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:01.692 12:59:42 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:01.692 12:59:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:01.692 12:59:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.692 12:59:42 -- common/autotest_common.sh@10 -- # set +x 00:15:01.692 12:59:42 -- nvmf/common.sh@469 -- # nvmfpid=84126 00:15:01.692 12:59:42 -- nvmf/common.sh@470 -- # waitforlisten 84126 00:15:01.692 12:59:42 -- common/autotest_common.sh@829 -- # '[' -z 84126 ']' 00:15:01.692 12:59:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.692 12:59:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.692 12:59:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.692 12:59:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.692 12:59:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.692 12:59:42 -- common/autotest_common.sh@10 -- # set +x 00:15:01.692 [2024-12-13 12:59:42.455584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:01.693 [2024-12-13 12:59:42.455682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.951 [2024-12-13 12:59:42.595884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.951 [2024-12-13 12:59:42.658571] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:01.951 [2024-12-13 12:59:42.658716] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.951 [2024-12-13 12:59:42.658729] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.951 [2024-12-13 12:59:42.658737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
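This is where the dirty variant earns its name: instead of tearing the lvstore down over RPC, the previous target (pid 83325) is killed with SIGKILL while the lvstore is still loaded, and a fresh target (pid 84126) is started in the same namespace. Re-creating the AIO bdev on the new target then triggers the blobstore recovery notices visible just below. Condensed, using only the pids, paths, and flags recorded in this run:

    kill -9 83325                                          # old nvmf_tgt; lvstore 'lvs' left dirty
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # pid 84126 here
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096      # recovery kicks in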
00:15:01.951 [2024-12-13 12:59:42.658797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.887 12:59:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.887 12:59:43 -- common/autotest_common.sh@862 -- # return 0 00:15:02.887 12:59:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:02.887 12:59:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.887 12:59:43 -- common/autotest_common.sh@10 -- # set +x 00:15:02.887 12:59:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.887 12:59:43 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.887 [2024-12-13 12:59:43.628612] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:02.887 [2024-12-13 12:59:43.628991] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:02.887 [2024-12-13 12:59:43.629221] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:03.146 12:59:43 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:03.146 12:59:43 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev ef2d5092-4d46-4bc2-847e-2b363b807820 00:15:03.146 12:59:43 -- common/autotest_common.sh@897 -- # local bdev_name=ef2d5092-4d46-4bc2-847e-2b363b807820 00:15:03.146 12:59:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.146 12:59:43 -- common/autotest_common.sh@899 -- # local i 00:15:03.146 12:59:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.146 12:59:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.146 12:59:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:03.146 12:59:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2d5092-4d46-4bc2-847e-2b363b807820 -t 2000 00:15:03.405 [ 00:15:03.405 { 00:15:03.405 "aliases": [ 00:15:03.405 "lvs/lvol" 00:15:03.405 ], 00:15:03.405 "assigned_rate_limits": { 00:15:03.405 "r_mbytes_per_sec": 0, 00:15:03.405 "rw_ios_per_sec": 0, 00:15:03.405 "rw_mbytes_per_sec": 0, 00:15:03.405 "w_mbytes_per_sec": 0 00:15:03.405 }, 00:15:03.405 "block_size": 4096, 00:15:03.405 "claimed": false, 00:15:03.405 "driver_specific": { 00:15:03.405 "lvol": { 00:15:03.405 "base_bdev": "aio_bdev", 00:15:03.405 "clone": false, 00:15:03.405 "esnap_clone": false, 00:15:03.405 "lvol_store_uuid": "7aa7e420-a055-4151-9768-a54b860b3ccd", 00:15:03.405 "snapshot": false, 00:15:03.405 "thin_provision": false 00:15:03.405 } 00:15:03.405 }, 00:15:03.405 "name": "ef2d5092-4d46-4bc2-847e-2b363b807820", 00:15:03.405 "num_blocks": 38912, 00:15:03.405 "product_name": "Logical Volume", 00:15:03.405 "supported_io_types": { 00:15:03.405 "abort": false, 00:15:03.405 "compare": false, 00:15:03.405 "compare_and_write": false, 00:15:03.405 "flush": false, 00:15:03.405 "nvme_admin": false, 00:15:03.405 "nvme_io": false, 00:15:03.405 "read": true, 00:15:03.405 "reset": true, 00:15:03.405 "unmap": true, 00:15:03.405 "write": true, 00:15:03.405 "write_zeroes": true 00:15:03.405 }, 00:15:03.405 "uuid": "ef2d5092-4d46-4bc2-847e-2b363b807820", 00:15:03.405 "zoned": false 00:15:03.405 } 00:15:03.405 ] 00:15:03.405 12:59:44 -- common/autotest_common.sh@905 -- # return 0 00:15:03.405 12:59:44 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:03.405 12:59:44 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:03.663 12:59:44 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:03.663 12:59:44 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:03.663 12:59:44 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:03.921 12:59:44 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:03.921 12:59:44 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:04.180 [2024-12-13 12:59:44.905920] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:04.180 12:59:44 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:04.180 12:59:44 -- common/autotest_common.sh@650 -- # local es=0 00:15:04.180 12:59:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:04.180 12:59:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.180 12:59:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:04.180 12:59:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.180 12:59:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:04.180 12:59:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.180 12:59:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:04.180 12:59:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.180 12:59:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:04.180 12:59:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:04.439 2024/12/13 12:59:45 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:7aa7e420-a055-4151-9768-a54b860b3ccd], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:04.439 request: 00:15:04.439 { 00:15:04.439 "method": "bdev_lvol_get_lvstores", 00:15:04.439 "params": { 00:15:04.439 "uuid": "7aa7e420-a055-4151-9768-a54b860b3ccd" 00:15:04.439 } 00:15:04.439 } 00:15:04.439 Got JSON-RPC error response 00:15:04.439 GoRPCClient: error on JSON-RPC call 00:15:04.439 12:59:45 -- common/autotest_common.sh@653 -- # es=1 00:15:04.439 12:59:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:04.439 12:59:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:04.439 12:59:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:04.439 12:59:45 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:04.697 aio_bdev 00:15:04.697 12:59:45 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ef2d5092-4d46-4bc2-847e-2b363b807820 00:15:04.697 12:59:45 -- common/autotest_common.sh@897 -- # local bdev_name=ef2d5092-4d46-4bc2-847e-2b363b807820 00:15:04.697 12:59:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:04.697 
12:59:45 -- common/autotest_common.sh@899 -- # local i 00:15:04.698 12:59:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:04.698 12:59:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:04.698 12:59:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.956 12:59:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2d5092-4d46-4bc2-847e-2b363b807820 -t 2000 00:15:05.214 [ 00:15:05.214 { 00:15:05.214 "aliases": [ 00:15:05.214 "lvs/lvol" 00:15:05.214 ], 00:15:05.214 "assigned_rate_limits": { 00:15:05.214 "r_mbytes_per_sec": 0, 00:15:05.214 "rw_ios_per_sec": 0, 00:15:05.214 "rw_mbytes_per_sec": 0, 00:15:05.214 "w_mbytes_per_sec": 0 00:15:05.214 }, 00:15:05.214 "block_size": 4096, 00:15:05.214 "claimed": false, 00:15:05.214 "driver_specific": { 00:15:05.214 "lvol": { 00:15:05.214 "base_bdev": "aio_bdev", 00:15:05.214 "clone": false, 00:15:05.214 "esnap_clone": false, 00:15:05.214 "lvol_store_uuid": "7aa7e420-a055-4151-9768-a54b860b3ccd", 00:15:05.214 "snapshot": false, 00:15:05.214 "thin_provision": false 00:15:05.214 } 00:15:05.214 }, 00:15:05.214 "name": "ef2d5092-4d46-4bc2-847e-2b363b807820", 00:15:05.214 "num_blocks": 38912, 00:15:05.214 "product_name": "Logical Volume", 00:15:05.214 "supported_io_types": { 00:15:05.214 "abort": false, 00:15:05.214 "compare": false, 00:15:05.214 "compare_and_write": false, 00:15:05.214 "flush": false, 00:15:05.214 "nvme_admin": false, 00:15:05.214 "nvme_io": false, 00:15:05.214 "read": true, 00:15:05.214 "reset": true, 00:15:05.214 "unmap": true, 00:15:05.214 "write": true, 00:15:05.214 "write_zeroes": true 00:15:05.214 }, 00:15:05.214 "uuid": "ef2d5092-4d46-4bc2-847e-2b363b807820", 00:15:05.214 "zoned": false 00:15:05.214 } 00:15:05.214 ] 00:15:05.214 12:59:45 -- common/autotest_common.sh@905 -- # return 0 00:15:05.214 12:59:45 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:05.214 12:59:45 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:05.473 12:59:46 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:05.473 12:59:46 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:05.473 12:59:46 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:05.732 12:59:46 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:05.732 12:59:46 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ef2d5092-4d46-4bc2-847e-2b363b807820 00:15:05.991 12:59:46 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7aa7e420-a055-4151-9768-a54b860b3ccd 00:15:06.249 12:59:46 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:06.508 12:59:47 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:06.766 00:15:06.766 real 0m19.977s 00:15:06.766 user 0m38.809s 00:15:06.766 sys 0m9.881s 00:15:06.766 12:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:06.766 12:59:47 -- common/autotest_common.sh@10 -- # set +x 00:15:06.766 ************************************ 00:15:06.766 END TEST lvs_grow_dirty 00:15:06.766 ************************************ 00:15:06.766 12:59:47 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:06.766 12:59:47 -- common/autotest_common.sh@806 -- # type=--id 00:15:06.766 12:59:47 -- common/autotest_common.sh@807 -- # id=0 00:15:06.766 12:59:47 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:06.766 12:59:47 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:06.766 12:59:47 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:06.766 12:59:47 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:06.766 12:59:47 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:06.766 12:59:47 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:06.766 nvmf_trace.0 00:15:06.766 12:59:47 -- common/autotest_common.sh@821 -- # return 0 00:15:06.766 12:59:47 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:06.766 12:59:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:06.766 12:59:47 -- nvmf/common.sh@116 -- # sync 00:15:07.333 12:59:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:07.333 12:59:48 -- nvmf/common.sh@119 -- # set +e 00:15:07.333 12:59:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:07.333 12:59:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:07.333 rmmod nvme_tcp 00:15:07.592 rmmod nvme_fabrics 00:15:07.592 rmmod nvme_keyring 00:15:07.592 12:59:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:07.592 12:59:48 -- nvmf/common.sh@123 -- # set -e 00:15:07.592 12:59:48 -- nvmf/common.sh@124 -- # return 0 00:15:07.592 12:59:48 -- nvmf/common.sh@477 -- # '[' -n 84126 ']' 00:15:07.592 12:59:48 -- nvmf/common.sh@478 -- # killprocess 84126 00:15:07.592 12:59:48 -- common/autotest_common.sh@936 -- # '[' -z 84126 ']' 00:15:07.592 12:59:48 -- common/autotest_common.sh@940 -- # kill -0 84126 00:15:07.592 12:59:48 -- common/autotest_common.sh@941 -- # uname 00:15:07.592 12:59:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.592 12:59:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84126 00:15:07.592 12:59:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.592 12:59:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.592 killing process with pid 84126 00:15:07.592 12:59:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84126' 00:15:07.592 12:59:48 -- common/autotest_common.sh@955 -- # kill 84126 00:15:07.592 12:59:48 -- common/autotest_common.sh@960 -- # wait 84126 00:15:07.851 12:59:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:07.851 12:59:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:07.851 12:59:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:07.851 12:59:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.851 12:59:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:07.851 12:59:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.851 12:59:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.851 12:59:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.851 12:59:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:07.851 00:15:07.851 real 0m40.384s 00:15:07.851 user 1m2.367s 00:15:07.851 sys 0m13.069s 00:15:07.851 12:59:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:07.851 12:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:07.851 
************************************ 00:15:07.851 END TEST nvmf_lvs_grow 00:15:07.851 ************************************ 00:15:07.851 12:59:48 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.851 12:59:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.851 12:59:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.851 12:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:07.851 ************************************ 00:15:07.851 START TEST nvmf_bdev_io_wait 00:15:07.851 ************************************ 00:15:07.851 12:59:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.851 * Looking for test storage... 00:15:07.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:07.851 12:59:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:07.851 12:59:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:07.851 12:59:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:08.110 12:59:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:08.111 12:59:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:08.111 12:59:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:08.111 12:59:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:08.111 12:59:48 -- scripts/common.sh@335 -- # IFS=.-: 00:15:08.111 12:59:48 -- scripts/common.sh@335 -- # read -ra ver1 00:15:08.111 12:59:48 -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.111 12:59:48 -- scripts/common.sh@336 -- # read -ra ver2 00:15:08.111 12:59:48 -- scripts/common.sh@337 -- # local 'op=<' 00:15:08.111 12:59:48 -- scripts/common.sh@339 -- # ver1_l=2 00:15:08.111 12:59:48 -- scripts/common.sh@340 -- # ver2_l=1 00:15:08.111 12:59:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:08.111 12:59:48 -- scripts/common.sh@343 -- # case "$op" in 00:15:08.111 12:59:48 -- scripts/common.sh@344 -- # : 1 00:15:08.111 12:59:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:08.111 12:59:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.111 12:59:48 -- scripts/common.sh@364 -- # decimal 1 00:15:08.111 12:59:48 -- scripts/common.sh@352 -- # local d=1 00:15:08.111 12:59:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.111 12:59:48 -- scripts/common.sh@354 -- # echo 1 00:15:08.111 12:59:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:08.111 12:59:48 -- scripts/common.sh@365 -- # decimal 2 00:15:08.111 12:59:48 -- scripts/common.sh@352 -- # local d=2 00:15:08.111 12:59:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.111 12:59:48 -- scripts/common.sh@354 -- # echo 2 00:15:08.111 12:59:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:08.111 12:59:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:08.111 12:59:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:08.111 12:59:48 -- scripts/common.sh@367 -- # return 0 00:15:08.111 12:59:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.111 12:59:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.111 --rc genhtml_branch_coverage=1 00:15:08.111 --rc genhtml_function_coverage=1 00:15:08.111 --rc genhtml_legend=1 00:15:08.111 --rc geninfo_all_blocks=1 00:15:08.111 --rc geninfo_unexecuted_blocks=1 00:15:08.111 00:15:08.111 ' 00:15:08.111 12:59:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.111 --rc genhtml_branch_coverage=1 00:15:08.111 --rc genhtml_function_coverage=1 00:15:08.111 --rc genhtml_legend=1 00:15:08.111 --rc geninfo_all_blocks=1 00:15:08.111 --rc geninfo_unexecuted_blocks=1 00:15:08.111 00:15:08.111 ' 00:15:08.111 12:59:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.111 --rc genhtml_branch_coverage=1 00:15:08.111 --rc genhtml_function_coverage=1 00:15:08.111 --rc genhtml_legend=1 00:15:08.111 --rc geninfo_all_blocks=1 00:15:08.111 --rc geninfo_unexecuted_blocks=1 00:15:08.111 00:15:08.111 ' 00:15:08.111 12:59:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.111 --rc genhtml_branch_coverage=1 00:15:08.111 --rc genhtml_function_coverage=1 00:15:08.111 --rc genhtml_legend=1 00:15:08.111 --rc geninfo_all_blocks=1 00:15:08.111 --rc geninfo_unexecuted_blocks=1 00:15:08.111 00:15:08.111 ' 00:15:08.111 12:59:48 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.111 12:59:48 -- nvmf/common.sh@7 -- # uname -s 00:15:08.111 12:59:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.111 12:59:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.111 12:59:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.111 12:59:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.111 12:59:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.111 12:59:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.111 12:59:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.111 12:59:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.111 12:59:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.111 12:59:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.111 12:59:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
00:15:08.111 12:59:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:15:08.111 12:59:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.111 12:59:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.111 12:59:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.111 12:59:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.111 12:59:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.111 12:59:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.111 12:59:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.111 12:59:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.111 12:59:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.111 12:59:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.111 12:59:48 -- paths/export.sh@5 -- # export PATH 00:15:08.111 12:59:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.111 12:59:48 -- nvmf/common.sh@46 -- # : 0 00:15:08.111 12:59:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:08.111 12:59:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:08.111 12:59:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:08.111 12:59:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.111 12:59:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.111 12:59:48 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:08.111 12:59:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:08.111 12:59:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:08.111 12:59:48 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.111 12:59:48 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.111 12:59:48 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:08.111 12:59:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:08.111 12:59:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.111 12:59:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:08.111 12:59:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:08.111 12:59:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:08.111 12:59:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.111 12:59:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.111 12:59:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.111 12:59:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:08.111 12:59:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:08.111 12:59:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:08.111 12:59:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:08.111 12:59:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:08.111 12:59:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:08.111 12:59:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.111 12:59:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.111 12:59:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:08.111 12:59:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:08.111 12:59:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.111 12:59:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.111 12:59:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.111 12:59:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.111 12:59:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.111 12:59:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.111 12:59:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.111 12:59:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.111 12:59:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:08.111 12:59:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:08.111 Cannot find device "nvmf_tgt_br" 00:15:08.111 12:59:48 -- nvmf/common.sh@154 -- # true 00:15:08.111 12:59:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.112 Cannot find device "nvmf_tgt_br2" 00:15:08.112 12:59:48 -- nvmf/common.sh@155 -- # true 00:15:08.112 12:59:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:08.112 12:59:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:08.112 Cannot find device "nvmf_tgt_br" 00:15:08.112 12:59:48 -- nvmf/common.sh@157 -- # true 00:15:08.112 12:59:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:08.112 Cannot find device "nvmf_tgt_br2" 00:15:08.112 12:59:48 -- nvmf/common.sh@158 -- # true 00:15:08.112 12:59:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:08.112 12:59:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:08.112 12:59:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.112 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.112 12:59:48 -- nvmf/common.sh@161 -- # true 00:15:08.112 12:59:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.112 12:59:48 -- nvmf/common.sh@162 -- # true 00:15:08.112 12:59:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.112 12:59:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.112 12:59:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.112 12:59:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.112 12:59:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.112 12:59:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.112 12:59:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.112 12:59:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:08.112 12:59:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:08.112 12:59:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:08.112 12:59:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:08.371 12:59:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:08.371 12:59:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:08.371 12:59:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.371 12:59:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.371 12:59:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.371 12:59:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:08.371 12:59:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:08.371 12:59:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.371 12:59:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.371 12:59:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.371 12:59:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.371 12:59:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.371 12:59:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:08.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:08.371 00:15:08.371 --- 10.0.0.2 ping statistics --- 00:15:08.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.371 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:08.371 12:59:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:08.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:08.371 00:15:08.371 --- 10.0.0.3 ping statistics --- 00:15:08.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.371 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:08.371 12:59:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:08.371 00:15:08.371 --- 10.0.0.1 ping statistics --- 00:15:08.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.371 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:08.371 12:59:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.371 12:59:48 -- nvmf/common.sh@421 -- # return 0 00:15:08.371 12:59:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.371 12:59:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.371 12:59:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:08.371 12:59:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:08.371 12:59:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.371 12:59:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:08.371 12:59:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:08.371 12:59:49 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:08.371 12:59:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.371 12:59:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.371 12:59:49 -- common/autotest_common.sh@10 -- # set +x 00:15:08.371 12:59:49 -- nvmf/common.sh@469 -- # nvmfpid=84545 00:15:08.371 12:59:49 -- nvmf/common.sh@470 -- # waitforlisten 84545 00:15:08.371 12:59:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:08.371 12:59:49 -- common/autotest_common.sh@829 -- # '[' -z 84545 ']' 00:15:08.371 12:59:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.371 12:59:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.371 12:59:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.371 12:59:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.371 12:59:49 -- common/autotest_common.sh@10 -- # set +x 00:15:08.371 [2024-12-13 12:59:49.080568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:08.371 [2024-12-13 12:59:49.080676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.630 [2024-12-13 12:59:49.219137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.630 [2024-12-13 12:59:49.291050] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.630 [2024-12-13 12:59:49.291175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.630 [2024-12-13 12:59:49.291187] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.630 [2024-12-13 12:59:49.291203] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
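The block above is nvmf_veth_init rebuilding the virtual test network from scratch: a dedicated namespace (nvmf_tgt_ns_spdk) for the target, veth pairs bridged through nvmf_br, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the two target interfaces inside the namespace, with three pings confirming reachability before nvmf_tgt is launched. A condensed sketch of the same topology, using the interface names from the trace and omitting the second target interface for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability check, as in the ping output above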
00:15:08.630 [2024-12-13 12:59:49.291352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.630 [2024-12-13 12:59:49.291484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.630 [2024-12-13 12:59:49.292031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.630 [2024-12-13 12:59:49.292038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.566 12:59:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.566 12:59:50 -- common/autotest_common.sh@862 -- # return 0 00:15:09.566 12:59:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:09.566 12:59:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 12:59:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:09.566 12:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 12:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:09.566 12:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 12:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.566 12:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 [2024-12-13 12:59:50.146343] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.566 12:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:09.566 12:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 Malloc0 00:15:09.566 12:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.566 12:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 12:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.566 12:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 12:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.566 12:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.566 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 [2024-12-13 12:59:50.200801] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.566 12:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84598 00:15:09.566 12:59:50 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@30 -- # READ_PID=84600 00:15:09.566 12:59:50 -- nvmf/common.sh@520 -- # config=() 00:15:09.566 12:59:50 -- nvmf/common.sh@520 -- # local subsystem config 00:15:09.566 12:59:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:09.566 12:59:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:09.566 { 00:15:09.566 "params": { 00:15:09.566 "name": "Nvme$subsystem", 00:15:09.566 "trtype": "$TEST_TRANSPORT", 00:15:09.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.566 "adrfam": "ipv4", 00:15:09.566 "trsvcid": "$NVMF_PORT", 00:15:09.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.566 "hdgst": ${hdgst:-false}, 00:15:09.566 "ddgst": ${ddgst:-false} 00:15:09.566 }, 00:15:09.566 "method": "bdev_nvme_attach_controller" 00:15:09.566 } 00:15:09.566 EOF 00:15:09.566 )") 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:09.566 12:59:50 -- nvmf/common.sh@520 -- # config=() 00:15:09.566 12:59:50 -- nvmf/common.sh@520 -- # local subsystem config 00:15:09.566 12:59:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84602 00:15:09.566 12:59:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:09.566 { 00:15:09.566 "params": { 00:15:09.566 "name": "Nvme$subsystem", 00:15:09.566 "trtype": "$TEST_TRANSPORT", 00:15:09.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.566 "adrfam": "ipv4", 00:15:09.566 "trsvcid": "$NVMF_PORT", 00:15:09.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.566 "hdgst": ${hdgst:-false}, 00:15:09.566 "ddgst": ${ddgst:-false} 00:15:09.566 }, 00:15:09.566 "method": "bdev_nvme_attach_controller" 00:15:09.566 } 00:15:09.566 EOF 00:15:09.566 )") 00:15:09.566 12:59:50 -- nvmf/common.sh@542 -- # cat 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84604 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@35 -- # sync 00:15:09.566 12:59:50 -- nvmf/common.sh@542 -- # cat 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:09.566 12:59:50 -- nvmf/common.sh@544 -- # jq . 00:15:09.566 12:59:50 -- nvmf/common.sh@544 -- # jq . 
00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:09.566 12:59:50 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:09.566 12:59:50 -- nvmf/common.sh@545 -- # IFS=, 00:15:09.566 12:59:50 -- nvmf/common.sh@520 -- # config=() 00:15:09.566 12:59:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:09.566 "params": { 00:15:09.566 "name": "Nvme1", 00:15:09.566 "trtype": "tcp", 00:15:09.566 "traddr": "10.0.0.2", 00:15:09.566 "adrfam": "ipv4", 00:15:09.566 "trsvcid": "4420", 00:15:09.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.566 "hdgst": false, 00:15:09.566 "ddgst": false 00:15:09.566 }, 00:15:09.566 "method": "bdev_nvme_attach_controller" 00:15:09.566 }' 00:15:09.566 12:59:50 -- nvmf/common.sh@520 -- # local subsystem config 00:15:09.566 12:59:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:09.566 12:59:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:09.566 { 00:15:09.566 "params": { 00:15:09.566 "name": "Nvme$subsystem", 00:15:09.566 "trtype": "$TEST_TRANSPORT", 00:15:09.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.567 "adrfam": "ipv4", 00:15:09.567 "trsvcid": "$NVMF_PORT", 00:15:09.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.567 "hdgst": ${hdgst:-false}, 00:15:09.567 "ddgst": ${ddgst:-false} 00:15:09.567 }, 00:15:09.567 "method": "bdev_nvme_attach_controller" 00:15:09.567 } 00:15:09.567 EOF 00:15:09.567 )") 00:15:09.567 12:59:50 -- nvmf/common.sh@545 -- # IFS=, 00:15:09.567 12:59:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:09.567 "params": { 00:15:09.567 "name": "Nvme1", 00:15:09.567 "trtype": "tcp", 00:15:09.567 "traddr": "10.0.0.2", 00:15:09.567 "adrfam": "ipv4", 00:15:09.567 "trsvcid": "4420", 00:15:09.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.567 "hdgst": false, 00:15:09.567 "ddgst": false 00:15:09.567 }, 00:15:09.567 "method": "bdev_nvme_attach_controller" 00:15:09.567 }' 00:15:09.567 12:59:50 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:09.567 12:59:50 -- nvmf/common.sh@520 -- # config=() 00:15:09.567 12:59:50 -- nvmf/common.sh@520 -- # local subsystem config 00:15:09.567 12:59:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:09.567 12:59:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:09.567 { 00:15:09.567 "params": { 00:15:09.567 "name": "Nvme$subsystem", 00:15:09.567 "trtype": "$TEST_TRANSPORT", 00:15:09.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.567 "adrfam": "ipv4", 00:15:09.567 "trsvcid": "$NVMF_PORT", 00:15:09.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.567 "hdgst": ${hdgst:-false}, 00:15:09.567 "ddgst": ${ddgst:-false} 00:15:09.567 }, 00:15:09.567 "method": "bdev_nvme_attach_controller" 00:15:09.567 } 00:15:09.567 EOF 00:15:09.567 )") 00:15:09.567 12:59:50 -- nvmf/common.sh@542 -- # cat 00:15:09.567 12:59:50 -- nvmf/common.sh@542 -- # cat 00:15:09.567 12:59:50 -- nvmf/common.sh@544 -- # jq . 00:15:09.567 12:59:50 -- nvmf/common.sh@544 -- # jq . 
00:15:09.567 12:59:50 -- nvmf/common.sh@545 -- # IFS=, 00:15:09.567 12:59:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:09.567 "params": { 00:15:09.567 "name": "Nvme1", 00:15:09.567 "trtype": "tcp", 00:15:09.567 "traddr": "10.0.0.2", 00:15:09.567 "adrfam": "ipv4", 00:15:09.567 "trsvcid": "4420", 00:15:09.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.567 "hdgst": false, 00:15:09.567 "ddgst": false 00:15:09.567 }, 00:15:09.567 "method": "bdev_nvme_attach_controller" 00:15:09.567 }' 00:15:09.567 12:59:50 -- nvmf/common.sh@545 -- # IFS=, 00:15:09.567 12:59:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:09.567 "params": { 00:15:09.567 "name": "Nvme1", 00:15:09.567 "trtype": "tcp", 00:15:09.567 "traddr": "10.0.0.2", 00:15:09.567 "adrfam": "ipv4", 00:15:09.567 "trsvcid": "4420", 00:15:09.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.567 "hdgst": false, 00:15:09.567 "ddgst": false 00:15:09.567 }, 00:15:09.567 "method": "bdev_nvme_attach_controller" 00:15:09.567 }' 00:15:09.567 [2024-12-13 12:59:50.268062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:09.567 [2024-12-13 12:59:50.268147] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:09.567 [2024-12-13 12:59:50.275631] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:09.567 [2024-12-13 12:59:50.275721] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:09.567 [2024-12-13 12:59:50.293252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:09.567 [2024-12-13 12:59:50.293884] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:09.567 [2024-12-13 12:59:50.294364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:09.567 [2024-12-13 12:59:50.294439] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:09.567 12:59:50 -- target/bdev_io_wait.sh@37 -- # wait 84598 00:15:09.825 [2024-12-13 12:59:50.484655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.825 [2024-12-13 12:59:50.552580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:09.825 [2024-12-13 12:59:50.559024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.083 [2024-12-13 12:59:50.622479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:10.083 [2024-12-13 12:59:50.630549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.083 [2024-12-13 12:59:50.698124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:10.083 [2024-12-13 12:59:50.708575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.083 Running I/O for 1 seconds... 
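At this point the bdev_io_wait test has launched four separate bdevperf instances against the same subsystem, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each fed its configuration over /dev/fd/63. The per-controller entry that gen_nvmf_target_json prints piecemeal in the trace amounts to the attach call below, which the helper embeds in the full --json configuration handed to each bdevperf instance:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }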
00:15:10.083 Running I/O for 1 seconds... 00:15:10.083 [2024-12-13 12:59:50.775141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:10.083 Running I/O for 1 seconds... 00:15:10.342 Running I/O for 1 seconds... 00:15:11.277 00:15:11.277 Latency(us) 00:15:11.277 [2024-12-13T12:59:52.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.277 [2024-12-13T12:59:52.053Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:11.277 Nvme1n1 : 1.02 6988.49 27.30 0.00 0.00 18092.30 8638.84 28955.00 00:15:11.277 [2024-12-13T12:59:52.053Z] =================================================================================================================== 00:15:11.277 [2024-12-13T12:59:52.053Z] Total : 6988.49 27.30 0.00 0.00 18092.30 8638.84 28955.00 00:15:11.277 00:15:11.277 Latency(us) 00:15:11.277 [2024-12-13T12:59:52.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.277 [2024-12-13T12:59:52.053Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:11.277 Nvme1n1 : 1.01 9167.34 35.81 0.00 0.00 13901.07 7923.90 25261.15 00:15:11.277 [2024-12-13T12:59:52.053Z] =================================================================================================================== 00:15:11.277 [2024-12-13T12:59:52.053Z] Total : 9167.34 35.81 0.00 0.00 13901.07 7923.90 25261.15 00:15:11.277 00:15:11.277 Latency(us) 00:15:11.277 [2024-12-13T12:59:52.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.277 [2024-12-13T12:59:52.053Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:11.277 Nvme1n1 : 1.00 204372.35 798.33 0.00 0.00 623.56 249.48 1176.67 00:15:11.277 [2024-12-13T12:59:52.053Z] =================================================================================================================== 00:15:11.277 [2024-12-13T12:59:52.054Z] Total : 204372.35 798.33 0.00 0.00 623.56 249.48 1176.67 00:15:11.278 00:15:11.278 Latency(us) 00:15:11.278 [2024-12-13T12:59:52.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.278 [2024-12-13T12:59:52.054Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:11.278 Nvme1n1 : 1.01 7173.33 28.02 0.00 0.00 17776.43 7119.59 39559.91 00:15:11.278 [2024-12-13T12:59:52.054Z] =================================================================================================================== 00:15:11.278 [2024-12-13T12:59:52.054Z] Total : 7173.33 28.02 0.00 0.00 17776.43 7119.59 39559.91 00:15:11.536 12:59:52 -- target/bdev_io_wait.sh@38 -- # wait 84600 00:15:11.536 12:59:52 -- target/bdev_io_wait.sh@39 -- # wait 84602 00:15:11.536 12:59:52 -- target/bdev_io_wait.sh@40 -- # wait 84604 00:15:11.536 12:59:52 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.536 12:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.536 12:59:52 -- common/autotest_common.sh@10 -- # set +x 00:15:11.536 12:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.536 12:59:52 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:11.536 12:59:52 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:11.536 12:59:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:11.536 12:59:52 -- nvmf/common.sh@116 -- # sync 00:15:11.537 12:59:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:11.537 12:59:52 -- nvmf/common.sh@119 -- # set +e 00:15:11.537 12:59:52 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:15:11.537 12:59:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:11.537 rmmod nvme_tcp 00:15:11.537 rmmod nvme_fabrics 00:15:11.537 rmmod nvme_keyring 00:15:11.537 12:59:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:11.537 12:59:52 -- nvmf/common.sh@123 -- # set -e 00:15:11.537 12:59:52 -- nvmf/common.sh@124 -- # return 0 00:15:11.537 12:59:52 -- nvmf/common.sh@477 -- # '[' -n 84545 ']' 00:15:11.537 12:59:52 -- nvmf/common.sh@478 -- # killprocess 84545 00:15:11.537 12:59:52 -- common/autotest_common.sh@936 -- # '[' -z 84545 ']' 00:15:11.537 12:59:52 -- common/autotest_common.sh@940 -- # kill -0 84545 00:15:11.537 12:59:52 -- common/autotest_common.sh@941 -- # uname 00:15:11.537 12:59:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:11.537 12:59:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84545 00:15:11.795 12:59:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:11.795 12:59:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:11.795 killing process with pid 84545 00:15:11.795 12:59:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84545' 00:15:11.795 12:59:52 -- common/autotest_common.sh@955 -- # kill 84545 00:15:11.795 12:59:52 -- common/autotest_common.sh@960 -- # wait 84545 00:15:11.795 12:59:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:11.795 12:59:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:11.795 12:59:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:11.795 12:59:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.795 12:59:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:11.795 12:59:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.795 12:59:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.795 12:59:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.795 12:59:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:11.795 ************************************ 00:15:11.795 END TEST nvmf_bdev_io_wait 00:15:11.795 ************************************ 00:15:11.795 00:15:11.795 real 0m4.083s 00:15:11.795 user 0m17.897s 00:15:11.795 sys 0m1.948s 00:15:11.795 12:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:11.795 12:59:52 -- common/autotest_common.sh@10 -- # set +x 00:15:12.055 12:59:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:12.055 12:59:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:12.055 12:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.055 12:59:52 -- common/autotest_common.sh@10 -- # set +x 00:15:12.055 ************************************ 00:15:12.055 START TEST nvmf_queue_depth 00:15:12.055 ************************************ 00:15:12.055 12:59:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:12.055 * Looking for test storage... 
00:15:12.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.055 12:59:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:12.055 12:59:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:12.055 12:59:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:12.055 12:59:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:12.055 12:59:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:12.055 12:59:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:12.055 12:59:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:12.055 12:59:52 -- scripts/common.sh@335 -- # IFS=.-: 00:15:12.055 12:59:52 -- scripts/common.sh@335 -- # read -ra ver1 00:15:12.055 12:59:52 -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.055 12:59:52 -- scripts/common.sh@336 -- # read -ra ver2 00:15:12.055 12:59:52 -- scripts/common.sh@337 -- # local 'op=<' 00:15:12.055 12:59:52 -- scripts/common.sh@339 -- # ver1_l=2 00:15:12.055 12:59:52 -- scripts/common.sh@340 -- # ver2_l=1 00:15:12.055 12:59:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:12.055 12:59:52 -- scripts/common.sh@343 -- # case "$op" in 00:15:12.055 12:59:52 -- scripts/common.sh@344 -- # : 1 00:15:12.055 12:59:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:12.055 12:59:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.055 12:59:52 -- scripts/common.sh@364 -- # decimal 1 00:15:12.055 12:59:52 -- scripts/common.sh@352 -- # local d=1 00:15:12.055 12:59:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.055 12:59:52 -- scripts/common.sh@354 -- # echo 1 00:15:12.055 12:59:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:12.055 12:59:52 -- scripts/common.sh@365 -- # decimal 2 00:15:12.055 12:59:52 -- scripts/common.sh@352 -- # local d=2 00:15:12.055 12:59:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.055 12:59:52 -- scripts/common.sh@354 -- # echo 2 00:15:12.055 12:59:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:12.055 12:59:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:12.055 12:59:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:12.055 12:59:52 -- scripts/common.sh@367 -- # return 0 00:15:12.055 12:59:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.055 12:59:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:12.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.055 --rc genhtml_branch_coverage=1 00:15:12.055 --rc genhtml_function_coverage=1 00:15:12.055 --rc genhtml_legend=1 00:15:12.055 --rc geninfo_all_blocks=1 00:15:12.055 --rc geninfo_unexecuted_blocks=1 00:15:12.055 00:15:12.055 ' 00:15:12.055 12:59:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:12.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.055 --rc genhtml_branch_coverage=1 00:15:12.055 --rc genhtml_function_coverage=1 00:15:12.055 --rc genhtml_legend=1 00:15:12.055 --rc geninfo_all_blocks=1 00:15:12.055 --rc geninfo_unexecuted_blocks=1 00:15:12.055 00:15:12.055 ' 00:15:12.055 12:59:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:12.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.055 --rc genhtml_branch_coverage=1 00:15:12.055 --rc genhtml_function_coverage=1 00:15:12.055 --rc genhtml_legend=1 00:15:12.055 --rc geninfo_all_blocks=1 00:15:12.055 --rc geninfo_unexecuted_blocks=1 00:15:12.055 00:15:12.055 ' 00:15:12.055 
12:59:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:12.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.055 --rc genhtml_branch_coverage=1 00:15:12.055 --rc genhtml_function_coverage=1 00:15:12.055 --rc genhtml_legend=1 00:15:12.055 --rc geninfo_all_blocks=1 00:15:12.055 --rc geninfo_unexecuted_blocks=1 00:15:12.055 00:15:12.055 ' 00:15:12.055 12:59:52 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.055 12:59:52 -- nvmf/common.sh@7 -- # uname -s 00:15:12.055 12:59:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.055 12:59:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.055 12:59:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.055 12:59:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.055 12:59:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.055 12:59:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.055 12:59:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.055 12:59:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.055 12:59:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.055 12:59:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.055 12:59:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:15:12.055 12:59:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:15:12.055 12:59:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.055 12:59:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.055 12:59:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.055 12:59:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.055 12:59:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.055 12:59:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.055 12:59:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.055 12:59:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.055 12:59:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.055 12:59:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.055 12:59:52 -- paths/export.sh@5 -- # export PATH 00:15:12.055 12:59:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.055 12:59:52 -- nvmf/common.sh@46 -- # : 0 00:15:12.055 12:59:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.055 12:59:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.055 12:59:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.055 12:59:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.055 12:59:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.055 12:59:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:12.055 12:59:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.055 12:59:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.056 12:59:52 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:12.056 12:59:52 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:12.056 12:59:52 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.056 12:59:52 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:12.056 12:59:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:12.056 12:59:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.056 12:59:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.056 12:59:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.056 12:59:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.056 12:59:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.056 12:59:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.056 12:59:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.056 12:59:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:12.056 12:59:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:12.056 12:59:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:12.056 12:59:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:12.056 12:59:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:12.056 12:59:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:12.056 12:59:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.056 12:59:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.056 12:59:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.056 12:59:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:12.056 12:59:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.056 12:59:52 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.056 12:59:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.056 12:59:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.056 12:59:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.056 12:59:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.056 12:59:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.056 12:59:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.056 12:59:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:12.056 12:59:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:12.056 Cannot find device "nvmf_tgt_br" 00:15:12.056 12:59:52 -- nvmf/common.sh@154 -- # true 00:15:12.056 12:59:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.056 Cannot find device "nvmf_tgt_br2" 00:15:12.056 12:59:52 -- nvmf/common.sh@155 -- # true 00:15:12.056 12:59:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:12.056 12:59:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:12.056 Cannot find device "nvmf_tgt_br" 00:15:12.056 12:59:52 -- nvmf/common.sh@157 -- # true 00:15:12.056 12:59:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:12.315 Cannot find device "nvmf_tgt_br2" 00:15:12.315 12:59:52 -- nvmf/common.sh@158 -- # true 00:15:12.315 12:59:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:12.315 12:59:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:12.315 12:59:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.315 12:59:52 -- nvmf/common.sh@161 -- # true 00:15:12.315 12:59:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.315 12:59:52 -- nvmf/common.sh@162 -- # true 00:15:12.315 12:59:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.315 12:59:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.315 12:59:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.315 12:59:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.315 12:59:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.315 12:59:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.315 12:59:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.315 12:59:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.315 12:59:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.315 12:59:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:12.315 12:59:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:12.315 12:59:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:12.315 12:59:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:12.315 12:59:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.315 12:59:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:12.315 12:59:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.315 12:59:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:12.315 12:59:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:12.315 12:59:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.315 12:59:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.315 12:59:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.315 12:59:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.315 12:59:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.315 12:59:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:12.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:15:12.315 00:15:12.315 --- 10.0.0.2 ping statistics --- 00:15:12.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.315 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:12.315 12:59:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:12.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:12.315 00:15:12.315 --- 10.0.0.3 ping statistics --- 00:15:12.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.315 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:12.315 12:59:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:12.315 00:15:12.315 --- 10.0.0.1 ping statistics --- 00:15:12.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.315 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:12.315 12:59:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.315 12:59:53 -- nvmf/common.sh@421 -- # return 0 00:15:12.315 12:59:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.315 12:59:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.315 12:59:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.315 12:59:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.315 12:59:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.315 12:59:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.315 12:59:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.574 12:59:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:12.574 12:59:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.574 12:59:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.574 12:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:12.574 12:59:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.574 12:59:53 -- nvmf/common.sh@469 -- # nvmfpid=84845 00:15:12.574 12:59:53 -- nvmf/common.sh@470 -- # waitforlisten 84845 00:15:12.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:12.574 12:59:53 -- common/autotest_common.sh@829 -- # '[' -z 84845 ']' 00:15:12.574 12:59:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.574 12:59:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.574 12:59:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.574 12:59:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.574 12:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:12.574 [2024-12-13 12:59:53.142550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:12.574 [2024-12-13 12:59:53.142630] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.574 [2024-12-13 12:59:53.267066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.574 [2024-12-13 12:59:53.325536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:12.574 [2024-12-13 12:59:53.325675] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.574 [2024-12-13 12:59:53.325687] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.574 [2024-12-13 12:59:53.325694] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.574 [2024-12-13 12:59:53.325718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.511 12:59:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.511 12:59:54 -- common/autotest_common.sh@862 -- # return 0 00:15:13.511 12:59:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:13.511 12:59:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.511 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.511 12:59:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.511 12:59:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.511 12:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.511 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.511 [2024-12-13 12:59:54.219397] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.511 12:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.511 12:59:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.511 12:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.511 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.511 Malloc0 00:15:13.511 12:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.511 12:59:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:13.511 12:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.511 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.511 12:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.511 12:59:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.511 12:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.511 12:59:54 -- common/autotest_common.sh@10 -- # set 
+x 00:15:13.511 12:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.511 12:59:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.511 12:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.511 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.511 [2024-12-13 12:59:54.279467] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.511 12:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.511 12:59:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=84895 00:15:13.511 12:59:54 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:13.511 12:59:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.511 12:59:54 -- target/queue_depth.sh@33 -- # waitforlisten 84895 /var/tmp/bdevperf.sock 00:15:13.511 12:59:54 -- common/autotest_common.sh@829 -- # '[' -z 84895 ']' 00:15:13.511 12:59:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.770 12:59:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.770 12:59:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.770 12:59:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.770 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.770 [2024-12-13 12:59:54.339994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:13.770 [2024-12-13 12:59:54.340282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84895 ] 00:15:13.770 [2024-12-13 12:59:54.479617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.770 [2024-12-13 12:59:54.539012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.705 12:59:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.705 12:59:55 -- common/autotest_common.sh@862 -- # return 0 00:15:14.705 12:59:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:14.705 12:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.705 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:15:14.705 NVMe0n1 00:15:14.705 12:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.705 12:59:55 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.964 Running I/O for 10 seconds... 
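(For reference, the queue-depth run traced above reduces to the following command sequence. This is a minimal reconstruction from the trace for anyone reproducing it by hand, not the literal test code: the script itself goes through its nvmftestinit/rpc_cmd helpers, paths are shortened to the repo root, and calling rpc.py directly for the target-side steps is an assumption.)

    # start the target inside the test namespace (what nvmfappstart -m 0x2 does above)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # TCP transport, a 64 MiB / 512 B-block malloc bdev, and the subsystem with one namespace and listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # drive 10 s of verify I/O at queue depth 1024 from bdevperf over its own RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

(The resulting IOPS/latency table follows in the log below.)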
00:15:24.953 00:15:24.953 Latency(us) 00:15:24.953 [2024-12-13T13:00:05.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.953 [2024-12-13T13:00:05.729Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:24.953 Verification LBA range: start 0x0 length 0x4000 00:15:24.953 NVMe0n1 : 10.05 16688.56 65.19 0.00 0.00 61167.90 12749.73 50998.92 00:15:24.953 [2024-12-13T13:00:05.729Z] =================================================================================================================== 00:15:24.953 [2024-12-13T13:00:05.729Z] Total : 16688.56 65.19 0.00 0.00 61167.90 12749.73 50998.92 00:15:24.953 0 00:15:24.953 13:00:05 -- target/queue_depth.sh@39 -- # killprocess 84895 00:15:24.953 13:00:05 -- common/autotest_common.sh@936 -- # '[' -z 84895 ']' 00:15:24.953 13:00:05 -- common/autotest_common.sh@940 -- # kill -0 84895 00:15:24.954 13:00:05 -- common/autotest_common.sh@941 -- # uname 00:15:24.954 13:00:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.954 13:00:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84895 00:15:24.954 killing process with pid 84895 00:15:24.954 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.954 00:15:24.954 Latency(us) 00:15:24.954 [2024-12-13T13:00:05.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.954 [2024-12-13T13:00:05.730Z] =================================================================================================================== 00:15:24.954 [2024-12-13T13:00:05.730Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.954 13:00:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:24.954 13:00:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:24.954 13:00:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84895' 00:15:24.954 13:00:05 -- common/autotest_common.sh@955 -- # kill 84895 00:15:24.954 13:00:05 -- common/autotest_common.sh@960 -- # wait 84895 00:15:25.226 13:00:05 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:25.226 13:00:05 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:25.226 13:00:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:25.226 13:00:05 -- nvmf/common.sh@116 -- # sync 00:15:25.226 13:00:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:25.226 13:00:05 -- nvmf/common.sh@119 -- # set +e 00:15:25.226 13:00:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:25.226 13:00:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:25.226 rmmod nvme_tcp 00:15:25.226 rmmod nvme_fabrics 00:15:25.226 rmmod nvme_keyring 00:15:25.226 13:00:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.226 13:00:05 -- nvmf/common.sh@123 -- # set -e 00:15:25.226 13:00:05 -- nvmf/common.sh@124 -- # return 0 00:15:25.226 13:00:05 -- nvmf/common.sh@477 -- # '[' -n 84845 ']' 00:15:25.226 13:00:05 -- nvmf/common.sh@478 -- # killprocess 84845 00:15:25.226 13:00:05 -- common/autotest_common.sh@936 -- # '[' -z 84845 ']' 00:15:25.226 13:00:05 -- common/autotest_common.sh@940 -- # kill -0 84845 00:15:25.226 13:00:05 -- common/autotest_common.sh@941 -- # uname 00:15:25.226 13:00:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:25.226 13:00:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84845 00:15:25.226 killing process with pid 84845 00:15:25.226 13:00:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:25.226 13:00:05 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:25.226 13:00:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84845' 00:15:25.226 13:00:05 -- common/autotest_common.sh@955 -- # kill 84845 00:15:25.226 13:00:05 -- common/autotest_common.sh@960 -- # wait 84845 00:15:25.485 13:00:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.485 13:00:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.485 13:00:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.485 13:00:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.485 13:00:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.485 13:00:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.485 13:00:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.485 13:00:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.485 13:00:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.485 ************************************ 00:15:25.485 END TEST nvmf_queue_depth 00:15:25.485 ************************************ 00:15:25.485 00:15:25.485 real 0m13.629s 00:15:25.485 user 0m23.123s 00:15:25.485 sys 0m2.264s 00:15:25.485 13:00:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:25.485 13:00:06 -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 13:00:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:25.485 13:00:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.485 13:00:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.485 13:00:06 -- common/autotest_common.sh@10 -- # set +x 00:15:25.744 ************************************ 00:15:25.744 START TEST nvmf_multipath 00:15:25.744 ************************************ 00:15:25.744 13:00:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:25.744 * Looking for test storage... 00:15:25.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.744 13:00:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:25.744 13:00:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:25.744 13:00:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:25.744 13:00:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:25.744 13:00:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:25.744 13:00:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:25.744 13:00:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:25.744 13:00:06 -- scripts/common.sh@335 -- # IFS=.-: 00:15:25.744 13:00:06 -- scripts/common.sh@335 -- # read -ra ver1 00:15:25.744 13:00:06 -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.744 13:00:06 -- scripts/common.sh@336 -- # read -ra ver2 00:15:25.744 13:00:06 -- scripts/common.sh@337 -- # local 'op=<' 00:15:25.744 13:00:06 -- scripts/common.sh@339 -- # ver1_l=2 00:15:25.744 13:00:06 -- scripts/common.sh@340 -- # ver2_l=1 00:15:25.744 13:00:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:25.744 13:00:06 -- scripts/common.sh@343 -- # case "$op" in 00:15:25.744 13:00:06 -- scripts/common.sh@344 -- # : 1 00:15:25.744 13:00:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:25.744 13:00:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.744 13:00:06 -- scripts/common.sh@364 -- # decimal 1 00:15:25.744 13:00:06 -- scripts/common.sh@352 -- # local d=1 00:15:25.744 13:00:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.744 13:00:06 -- scripts/common.sh@354 -- # echo 1 00:15:25.744 13:00:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:25.744 13:00:06 -- scripts/common.sh@365 -- # decimal 2 00:15:25.744 13:00:06 -- scripts/common.sh@352 -- # local d=2 00:15:25.744 13:00:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.744 13:00:06 -- scripts/common.sh@354 -- # echo 2 00:15:25.744 13:00:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:25.744 13:00:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:25.744 13:00:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:25.744 13:00:06 -- scripts/common.sh@367 -- # return 0 00:15:25.744 13:00:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.744 13:00:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.744 --rc genhtml_branch_coverage=1 00:15:25.744 --rc genhtml_function_coverage=1 00:15:25.744 --rc genhtml_legend=1 00:15:25.744 --rc geninfo_all_blocks=1 00:15:25.744 --rc geninfo_unexecuted_blocks=1 00:15:25.744 00:15:25.744 ' 00:15:25.744 13:00:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.744 --rc genhtml_branch_coverage=1 00:15:25.744 --rc genhtml_function_coverage=1 00:15:25.744 --rc genhtml_legend=1 00:15:25.744 --rc geninfo_all_blocks=1 00:15:25.744 --rc geninfo_unexecuted_blocks=1 00:15:25.744 00:15:25.744 ' 00:15:25.744 13:00:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.744 --rc genhtml_branch_coverage=1 00:15:25.744 --rc genhtml_function_coverage=1 00:15:25.744 --rc genhtml_legend=1 00:15:25.744 --rc geninfo_all_blocks=1 00:15:25.744 --rc geninfo_unexecuted_blocks=1 00:15:25.744 00:15:25.744 ' 00:15:25.744 13:00:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.744 --rc genhtml_branch_coverage=1 00:15:25.744 --rc genhtml_function_coverage=1 00:15:25.744 --rc genhtml_legend=1 00:15:25.744 --rc geninfo_all_blocks=1 00:15:25.744 --rc geninfo_unexecuted_blocks=1 00:15:25.744 00:15:25.744 ' 00:15:25.744 13:00:06 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.744 13:00:06 -- nvmf/common.sh@7 -- # uname -s 00:15:25.744 13:00:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.744 13:00:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.744 13:00:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.744 13:00:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.744 13:00:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.744 13:00:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.744 13:00:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.744 13:00:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.744 13:00:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.744 13:00:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.744 13:00:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:15:25.744 
13:00:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:15:25.744 13:00:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.744 13:00:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.744 13:00:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.744 13:00:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.744 13:00:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.744 13:00:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.744 13:00:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.744 13:00:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.745 13:00:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.745 13:00:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.745 13:00:06 -- paths/export.sh@5 -- # export PATH 00:15:25.745 13:00:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.745 13:00:06 -- nvmf/common.sh@46 -- # : 0 00:15:25.745 13:00:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.745 13:00:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.745 13:00:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.745 13:00:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.745 13:00:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.745 13:00:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:25.745 13:00:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.745 13:00:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.745 13:00:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.745 13:00:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.745 13:00:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:25.745 13:00:06 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.745 13:00:06 -- target/multipath.sh@43 -- # nvmftestinit 00:15:25.745 13:00:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.745 13:00:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.745 13:00:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.745 13:00:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.745 13:00:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.745 13:00:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.745 13:00:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.745 13:00:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.745 13:00:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:25.745 13:00:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:25.745 13:00:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:25.745 13:00:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:25.745 13:00:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:25.745 13:00:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:25.745 13:00:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.745 13:00:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.745 13:00:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.745 13:00:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.745 13:00:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.745 13:00:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.745 13:00:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.745 13:00:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.745 13:00:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.745 13:00:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.745 13:00:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.745 13:00:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.745 13:00:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.745 13:00:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.745 Cannot find device "nvmf_tgt_br" 00:15:25.745 13:00:06 -- nvmf/common.sh@154 -- # true 00:15:25.745 13:00:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.003 Cannot find device "nvmf_tgt_br2" 00:15:26.003 13:00:06 -- nvmf/common.sh@155 -- # true 00:15:26.003 13:00:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:26.003 13:00:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:26.003 Cannot find device "nvmf_tgt_br" 00:15:26.003 13:00:06 -- nvmf/common.sh@157 -- # true 00:15:26.003 13:00:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:26.003 Cannot find device "nvmf_tgt_br2" 00:15:26.003 13:00:06 -- nvmf/common.sh@158 -- # true 00:15:26.003 13:00:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:26.003 13:00:06 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:26.003 13:00:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.003 13:00:06 -- nvmf/common.sh@161 -- # true 00:15:26.003 13:00:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.003 13:00:06 -- nvmf/common.sh@162 -- # true 00:15:26.003 13:00:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.003 13:00:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.003 13:00:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.003 13:00:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.003 13:00:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.003 13:00:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.003 13:00:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.003 13:00:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.003 13:00:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.003 13:00:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:26.003 13:00:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:26.003 13:00:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:26.003 13:00:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:26.003 13:00:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.003 13:00:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.003 13:00:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.003 13:00:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:26.003 13:00:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:26.003 13:00:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.003 13:00:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.003 13:00:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.261 13:00:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.261 13:00:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.261 13:00:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:26.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:15:26.261 00:15:26.261 --- 10.0.0.2 ping statistics --- 00:15:26.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.261 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:26.261 13:00:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:26.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:26.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:15:26.261 00:15:26.261 --- 10.0.0.3 ping statistics --- 00:15:26.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.261 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:26.261 13:00:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:26.261 00:15:26.261 --- 10.0.0.1 ping statistics --- 00:15:26.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.261 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:26.261 13:00:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.261 13:00:06 -- nvmf/common.sh@421 -- # return 0 00:15:26.261 13:00:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:26.261 13:00:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.261 13:00:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:26.261 13:00:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:26.261 13:00:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.261 13:00:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:26.261 13:00:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:26.261 13:00:06 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:26.261 13:00:06 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:26.261 13:00:06 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:26.261 13:00:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:26.261 13:00:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.261 13:00:06 -- common/autotest_common.sh@10 -- # set +x 00:15:26.261 13:00:06 -- nvmf/common.sh@469 -- # nvmfpid=85236 00:15:26.261 13:00:06 -- nvmf/common.sh@470 -- # waitforlisten 85236 00:15:26.262 13:00:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:26.262 13:00:06 -- common/autotest_common.sh@829 -- # '[' -z 85236 ']' 00:15:26.262 13:00:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.262 13:00:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.262 13:00:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.262 13:00:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.262 13:00:06 -- common/autotest_common.sh@10 -- # set +x 00:15:26.262 [2024-12-13 13:00:06.882138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:26.262 [2024-12-13 13:00:06.882397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.262 [2024-12-13 13:00:07.016403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.520 [2024-12-13 13:00:07.078364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.520 [2024-12-13 13:00:07.078781] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:26.520 [2024-12-13 13:00:07.078802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.520 [2024-12-13 13:00:07.078811] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.520 [2024-12-13 13:00:07.078976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.520 [2024-12-13 13:00:07.079112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.520 [2024-12-13 13:00:07.079336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.520 [2024-12-13 13:00:07.079341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.456 13:00:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.456 13:00:07 -- common/autotest_common.sh@862 -- # return 0 00:15:27.456 13:00:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.456 13:00:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.456 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:15:27.456 13:00:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.456 13:00:07 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:27.456 [2024-12-13 13:00:08.189323] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.456 13:00:08 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:27.715 Malloc0 00:15:27.715 13:00:08 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:27.974 13:00:08 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:28.233 13:00:08 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.491 [2024-12-13 13:00:09.115141] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.491 13:00:09 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:28.750 [2024-12-13 13:00:09.331388] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.750 13:00:09 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:29.009 13:00:09 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:29.009 13:00:09 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.009 13:00:09 -- common/autotest_common.sh@1187 -- # local i=0 00:15:29.009 13:00:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.009 13:00:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:29.009 13:00:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:31.541 13:00:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
00:15:31.541 13:00:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:31.541 13:00:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.541 13:00:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:31.541 13:00:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.541 13:00:11 -- common/autotest_common.sh@1197 -- # return 0 00:15:31.541 13:00:11 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:31.541 13:00:11 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:31.541 13:00:11 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:31.541 13:00:11 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:31.541 13:00:11 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:31.541 13:00:11 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:31.541 13:00:11 -- target/multipath.sh@38 -- # return 0 00:15:31.541 13:00:11 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:31.541 13:00:11 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:31.541 13:00:11 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:31.541 13:00:11 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:31.541 13:00:11 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:31.541 13:00:11 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:31.541 13:00:11 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:31.541 13:00:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:31.541 13:00:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:31.541 13:00:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:31.541 13:00:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:31.541 13:00:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:31.541 13:00:11 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:31.541 13:00:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:31.541 13:00:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:31.541 13:00:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:31.541 13:00:11 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:31.541 13:00:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:31.541 13:00:11 -- target/multipath.sh@85 -- # echo numa 00:15:31.541 13:00:11 -- target/multipath.sh@88 -- # fio_pid=85368 00:15:31.541 13:00:11 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:31.541 13:00:11 -- target/multipath.sh@90 -- # sleep 1 00:15:31.541 [global] 00:15:31.541 thread=1 00:15:31.541 invalidate=1 00:15:31.541 rw=randrw 00:15:31.541 time_based=1 00:15:31.541 runtime=6 00:15:31.541 ioengine=libaio 00:15:31.541 direct=1 00:15:31.541 bs=4096 00:15:31.541 iodepth=128 00:15:31.541 norandommap=0 00:15:31.541 numjobs=1 00:15:31.541 00:15:31.541 verify_dump=1 00:15:31.541 verify_backlog=512 00:15:31.541 verify_state_save=0 00:15:31.541 do_verify=1 00:15:31.541 verify=crc32c-intel 00:15:31.541 [job0] 00:15:31.541 filename=/dev/nvme0n1 00:15:31.541 Could not set queue depth (nvme0n1) 00:15:31.541 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.541 fio-3.35 00:15:31.541 Starting 1 thread 00:15:32.108 13:00:12 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:32.367 13:00:13 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:32.625 13:00:13 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:32.625 13:00:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:32.625 13:00:13 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.625 13:00:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:32.625 13:00:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:32.625 13:00:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:32.625 13:00:13 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:32.626 13:00:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:32.626 13:00:13 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.626 13:00:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:32.626 13:00:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:32.626 13:00:13 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:32.626 13:00:13 -- target/multipath.sh@25 -- # sleep 1s 00:15:34.002 13:00:14 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:34.002 13:00:14 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.002 13:00:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:34.002 13:00:14 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:34.002 13:00:14 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:34.261 13:00:14 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:34.261 13:00:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:34.261 13:00:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:34.261 13:00:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:34.261 13:00:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:34.261 13:00:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:34.261 13:00:14 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:34.261 13:00:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:34.261 13:00:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:34.261 13:00:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:34.261 13:00:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.261 13:00:14 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:34.261 13:00:14 -- target/multipath.sh@25 -- # sleep 1s 00:15:35.203 13:00:15 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:35.203 13:00:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:35.203 13:00:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:35.203 13:00:15 -- target/multipath.sh@104 -- # wait 85368 00:15:37.805 00:15:37.805 job0: (groupid=0, jobs=1): err= 0: pid=85400: Fri Dec 13 13:00:18 2024 00:15:37.805 read: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(287MiB/5999msec) 00:15:37.805 slat (usec): min=5, max=7300, avg=46.77, stdev=211.94 00:15:37.805 clat (usec): min=1312, max=14140, avg=7165.01, stdev=1127.77 00:15:37.805 lat (usec): min=1652, max=14152, avg=7211.79, stdev=1136.03 00:15:37.805 clat percentiles (usec): 00:15:37.805 | 1.00th=[ 4359], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6390], 00:15:37.805 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7373], 00:15:37.805 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 8979], 00:15:37.805 | 99.00th=[10683], 99.50th=[11076], 99.90th=[11994], 99.95th=[12649], 00:15:37.805 | 99.99th=[13173] 00:15:37.805 bw ( KiB/s): min=16360, max=30176, per=51.24%, avg=25110.55, stdev=5047.55, samples=11 00:15:37.805 iops : min= 4090, max= 7544, avg=6277.64, stdev=1261.89, samples=11 00:15:37.805 write: IOPS=7071, BW=27.6MiB/s (29.0MB/s)(147MiB/5319msec); 0 zone resets 00:15:37.805 slat (usec): min=7, max=2534, avg=58.50, stdev=146.42 00:15:37.805 clat (usec): min=476, max=12747, avg=6216.37, stdev=967.63 00:15:37.805 lat (usec): min=1258, max=12770, avg=6274.87, stdev=970.60 00:15:37.805 clat percentiles (usec): 00:15:37.805 | 1.00th=[ 3458], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 5669], 00:15:37.805 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6259], 60.00th=[ 6456], 00:15:37.805 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7111], 95.00th=[ 7439], 00:15:37.805 | 99.00th=[ 9241], 99.50th=[ 9765], 99.90th=[11338], 99.95th=[11731], 00:15:37.805 | 99.99th=[12256] 00:15:37.805 bw ( KiB/s): min=16592, max=29400, per=88.79%, avg=25115.64, stdev=4593.94, samples=11 00:15:37.805 iops : min= 4148, max= 7350, avg=6278.91, stdev=1148.49, samples=11 00:15:37.805 lat (usec) : 500=0.01% 00:15:37.805 lat (msec) : 2=0.03%, 4=1.40%, 10=96.88%, 20=1.68% 00:15:37.805 cpu : usr=6.08%, sys=22.96%, ctx=6739, majf=0, minf=90 00:15:37.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:37.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.805 issued rwts: total=73498,37613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.805 00:15:37.805 Run status group 0 (all jobs): 00:15:37.805 READ: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=287MiB (301MB), run=5999-5999msec 00:15:37.805 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=147MiB (154MB), run=5319-5319msec 00:15:37.805 00:15:37.805 Disk stats (read/write): 00:15:37.805 nvme0n1: ios=71991/37439, merge=0/0, ticks=482119/216911, in_queue=699030, util=98.56% 00:15:37.805 13:00:18 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:37.805 13:00:18 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:38.064 13:00:18 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:15:38.064 13:00:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:38.064 13:00:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:38.064 13:00:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:38.064 13:00:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:38.064 13:00:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:38.064 13:00:18 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:38.064 13:00:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:38.064 13:00:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:38.064 13:00:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:38.064 13:00:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.064 13:00:18 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:38.064 13:00:18 -- target/multipath.sh@25 -- # sleep 1s 00:15:39.000 13:00:19 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:39.000 13:00:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.000 13:00:19 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:39.000 13:00:19 -- target/multipath.sh@113 -- # echo round-robin 00:15:39.000 13:00:19 -- target/multipath.sh@116 -- # fio_pid=85524 00:15:39.000 13:00:19 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:39.000 13:00:19 -- target/multipath.sh@118 -- # sleep 1 00:15:39.000 [global] 00:15:39.000 thread=1 00:15:39.000 invalidate=1 00:15:39.000 rw=randrw 00:15:39.000 time_based=1 00:15:39.000 runtime=6 00:15:39.000 ioengine=libaio 00:15:39.000 direct=1 00:15:39.000 bs=4096 00:15:39.000 iodepth=128 00:15:39.000 norandommap=0 00:15:39.000 numjobs=1 00:15:39.000 00:15:39.000 verify_dump=1 00:15:39.000 verify_backlog=512 00:15:39.000 verify_state_save=0 00:15:39.000 do_verify=1 00:15:39.000 verify=crc32c-intel 00:15:39.000 [job0] 00:15:39.000 filename=/dev/nvme0n1 00:15:39.259 Could not set queue depth (nvme0n1) 00:15:39.259 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:39.259 fio-3.35 00:15:39.259 Starting 1 thread 00:15:40.196 13:00:20 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:40.454 13:00:21 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:40.713 13:00:21 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:40.713 13:00:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:40.713 13:00:21 -- target/multipath.sh@22 -- # local timeout=20 00:15:40.713 13:00:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:40.713 13:00:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:40.713 13:00:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:40.713 13:00:21 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:40.713 13:00:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:40.713 13:00:21 -- target/multipath.sh@22 -- # local timeout=20 00:15:40.713 13:00:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:40.713 13:00:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:40.713 13:00:21 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:40.713 13:00:21 -- target/multipath.sh@25 -- # sleep 1s 00:15:41.649 13:00:22 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:41.649 13:00:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:41.649 13:00:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:41.649 13:00:22 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:41.907 13:00:22 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:42.166 13:00:22 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:42.166 13:00:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:42.166 13:00:22 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.166 13:00:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:42.166 13:00:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:42.166 13:00:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:42.166 13:00:22 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:42.166 13:00:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:42.166 13:00:22 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.166 13:00:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:42.166 13:00:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:42.166 13:00:22 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:42.166 13:00:22 -- target/multipath.sh@25 -- # sleep 1s 00:15:43.102 13:00:23 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:43.102 13:00:23 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:43.102 13:00:23 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:43.102 13:00:23 -- target/multipath.sh@132 -- # wait 85524 00:15:45.640 00:15:45.640 job0: (groupid=0, jobs=1): err= 0: pid=85550: Fri Dec 13 13:00:26 2024 00:15:45.640 read: IOPS=13.2k, BW=51.7MiB/s (54.2MB/s)(311MiB/6003msec) 00:15:45.640 slat (usec): min=2, max=6154, avg=39.46, stdev=194.67 00:15:45.640 clat (usec): min=475, max=13957, avg=6759.25, stdev=1430.27 00:15:45.640 lat (usec): min=601, max=13985, avg=6798.71, stdev=1444.74 00:15:45.640 clat percentiles (usec): 00:15:45.640 | 1.00th=[ 3130], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5735], 00:15:45.640 | 30.00th=[ 6259], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 7111], 00:15:45.640 | 70.00th=[ 7439], 80.00th=[ 7832], 90.00th=[ 8356], 95.00th=[ 8848], 00:15:45.640 | 99.00th=[10552], 99.50th=[10945], 99.90th=[12256], 99.95th=[12911], 00:15:45.640 | 99.99th=[13698] 00:15:45.640 bw ( KiB/s): min=16208, max=45384, per=52.71%, avg=27923.55, stdev=10363.59, samples=11 00:15:45.640 iops : min= 4052, max=11346, avg=6980.82, stdev=2590.90, samples=11 00:15:45.640 write: IOPS=7934, BW=31.0MiB/s (32.5MB/s)(158MiB/5096msec); 0 zone resets 00:15:45.640 slat (usec): min=3, max=5781, avg=49.80, stdev=126.64 00:15:45.640 clat (usec): min=417, max=12946, avg=5551.32, stdev=1490.92 00:15:45.640 lat (usec): min=485, max=12969, avg=5601.12, stdev=1503.02 00:15:45.640 clat percentiles (usec): 00:15:45.640 | 1.00th=[ 2311], 5.00th=[ 2999], 10.00th=[ 3392], 20.00th=[ 3982], 00:15:45.640 | 30.00th=[ 4621], 40.00th=[ 5538], 50.00th=[ 5997], 60.00th=[ 6259], 00:15:45.640 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7111], 95.00th=[ 7373], 00:15:45.640 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[11207], 99.95th=[11731], 00:15:45.640 | 99.99th=[12518] 00:15:45.640 bw ( KiB/s): min=17112, max=45816, per=87.91%, avg=27900.36, stdev=10040.01, samples=11 00:15:45.640 iops : min= 4278, max=11454, avg=6975.09, stdev=2510.00, samples=11 00:15:45.640 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:45.640 lat (msec) : 2=0.19%, 4=9.08%, 10=89.33%, 20=1.39% 00:15:45.640 cpu : usr=6.75%, sys=24.73%, ctx=7710, majf=0, minf=127 00:15:45.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.640 issued rwts: total=79495,40432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.640 00:15:45.640 Run status group 0 (all jobs): 00:15:45.640 READ: bw=51.7MiB/s (54.2MB/s), 51.7MiB/s-51.7MiB/s (54.2MB/s-54.2MB/s), io=311MiB (326MB), run=6003-6003msec 00:15:45.640 WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=158MiB (166MB), run=5096-5096msec 00:15:45.640 00:15:45.640 Disk stats (read/write): 00:15:45.640 nvme0n1: ios=78807/39596, merge=0/0, ticks=492456/200291, in_queue=692747, util=98.60% 00:15:45.640 13:00:26 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:45.640 13:00:26 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.640 13:00:26 -- common/autotest_common.sh@1208 -- # local i=0 00:15:45.640 13:00:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:45.640 13:00:26 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.640 13:00:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:45.640 13:00:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.640 13:00:26 -- common/autotest_common.sh@1220 -- # return 0 00:15:45.640 13:00:26 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.898 13:00:26 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:45.898 13:00:26 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:45.898 13:00:26 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:45.898 13:00:26 -- target/multipath.sh@144 -- # nvmftestfini 00:15:45.898 13:00:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:45.898 13:00:26 -- nvmf/common.sh@116 -- # sync 00:15:45.898 13:00:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:45.898 13:00:26 -- nvmf/common.sh@119 -- # set +e 00:15:45.898 13:00:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:45.898 13:00:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:45.898 rmmod nvme_tcp 00:15:45.898 rmmod nvme_fabrics 00:15:45.898 rmmod nvme_keyring 00:15:45.898 13:00:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:45.898 13:00:26 -- nvmf/common.sh@123 -- # set -e 00:15:45.898 13:00:26 -- nvmf/common.sh@124 -- # return 0 00:15:45.898 13:00:26 -- nvmf/common.sh@477 -- # '[' -n 85236 ']' 00:15:45.899 13:00:26 -- nvmf/common.sh@478 -- # killprocess 85236 00:15:45.899 13:00:26 -- common/autotest_common.sh@936 -- # '[' -z 85236 ']' 00:15:45.899 13:00:26 -- common/autotest_common.sh@940 -- # kill -0 85236 00:15:45.899 13:00:26 -- common/autotest_common.sh@941 -- # uname 00:15:45.899 13:00:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:45.899 13:00:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85236 00:15:45.899 killing process with pid 85236 00:15:45.899 13:00:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:45.899 13:00:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:45.899 13:00:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85236' 00:15:45.899 13:00:26 -- common/autotest_common.sh@955 -- # kill 85236 00:15:45.899 13:00:26 -- common/autotest_common.sh@960 -- # wait 85236 00:15:46.157 13:00:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:46.157 13:00:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:46.157 13:00:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:46.157 13:00:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.157 13:00:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:46.157 13:00:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.157 13:00:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.157 13:00:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.157 13:00:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:46.157 00:15:46.157 real 0m20.557s 00:15:46.157 user 1m19.937s 00:15:46.157 sys 0m7.086s 00:15:46.157 13:00:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:46.157 13:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:46.157 ************************************ 00:15:46.157 END TEST nvmf_multipath 00:15:46.157 ************************************ 00:15:46.157 13:00:26 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:46.157 13:00:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:46.157 13:00:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:46.157 13:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:46.157 ************************************ 00:15:46.157 START TEST nvmf_zcopy 00:15:46.157 ************************************ 00:15:46.157 13:00:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:46.417 * Looking for test storage... 00:15:46.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:46.417 13:00:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:46.417 13:00:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:46.417 13:00:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:46.417 13:00:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:46.417 13:00:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:46.417 13:00:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:46.417 13:00:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:46.417 13:00:27 -- scripts/common.sh@335 -- # IFS=.-: 00:15:46.417 13:00:27 -- scripts/common.sh@335 -- # read -ra ver1 00:15:46.417 13:00:27 -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.417 13:00:27 -- scripts/common.sh@336 -- # read -ra ver2 00:15:46.417 13:00:27 -- scripts/common.sh@337 -- # local 'op=<' 00:15:46.417 13:00:27 -- scripts/common.sh@339 -- # ver1_l=2 00:15:46.417 13:00:27 -- scripts/common.sh@340 -- # ver2_l=1 00:15:46.417 13:00:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:46.417 13:00:27 -- scripts/common.sh@343 -- # case "$op" in 00:15:46.417 13:00:27 -- scripts/common.sh@344 -- # : 1 00:15:46.417 13:00:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:46.417 13:00:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.417 13:00:27 -- scripts/common.sh@364 -- # decimal 1 00:15:46.417 13:00:27 -- scripts/common.sh@352 -- # local d=1 00:15:46.417 13:00:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.417 13:00:27 -- scripts/common.sh@354 -- # echo 1 00:15:46.417 13:00:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:46.417 13:00:27 -- scripts/common.sh@365 -- # decimal 2 00:15:46.417 13:00:27 -- scripts/common.sh@352 -- # local d=2 00:15:46.417 13:00:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.417 13:00:27 -- scripts/common.sh@354 -- # echo 2 00:15:46.417 13:00:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:46.417 13:00:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:46.417 13:00:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:46.417 13:00:27 -- scripts/common.sh@367 -- # return 0 00:15:46.417 13:00:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.417 13:00:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:46.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.417 --rc genhtml_branch_coverage=1 00:15:46.417 --rc genhtml_function_coverage=1 00:15:46.417 --rc genhtml_legend=1 00:15:46.417 --rc geninfo_all_blocks=1 00:15:46.417 --rc geninfo_unexecuted_blocks=1 00:15:46.417 00:15:46.417 ' 00:15:46.417 13:00:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:46.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.417 --rc genhtml_branch_coverage=1 00:15:46.417 --rc genhtml_function_coverage=1 00:15:46.417 --rc genhtml_legend=1 00:15:46.417 --rc geninfo_all_blocks=1 00:15:46.417 --rc geninfo_unexecuted_blocks=1 00:15:46.417 00:15:46.417 ' 00:15:46.417 13:00:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:46.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.417 --rc genhtml_branch_coverage=1 00:15:46.417 --rc genhtml_function_coverage=1 00:15:46.417 --rc genhtml_legend=1 00:15:46.417 --rc geninfo_all_blocks=1 00:15:46.417 --rc geninfo_unexecuted_blocks=1 00:15:46.417 00:15:46.417 ' 00:15:46.417 13:00:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:46.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.417 --rc genhtml_branch_coverage=1 00:15:46.417 --rc genhtml_function_coverage=1 00:15:46.417 --rc genhtml_legend=1 00:15:46.417 --rc geninfo_all_blocks=1 00:15:46.417 --rc geninfo_unexecuted_blocks=1 00:15:46.417 00:15:46.417 ' 00:15:46.417 13:00:27 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.417 13:00:27 -- nvmf/common.sh@7 -- # uname -s 00:15:46.417 13:00:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.417 13:00:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.417 13:00:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.417 13:00:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.417 13:00:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.417 13:00:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.417 13:00:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.417 13:00:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.417 13:00:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.417 13:00:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.417 13:00:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:15:46.417 
13:00:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:15:46.417 13:00:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.417 13:00:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.417 13:00:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.417 13:00:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.417 13:00:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.417 13:00:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.417 13:00:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.417 13:00:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.417 13:00:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.417 13:00:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.417 13:00:27 -- paths/export.sh@5 -- # export PATH 00:15:46.417 13:00:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.417 13:00:27 -- nvmf/common.sh@46 -- # : 0 00:15:46.417 13:00:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:46.417 13:00:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:46.417 13:00:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:46.417 13:00:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.417 13:00:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.417 13:00:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
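The nvmf/common.sh variables sourced above (NVMF_PORT, NVMF_SERIAL, the generated NVME_HOSTNQN/NVME_HOSTID, NVME_CONNECT) are what the initiator-side steps of these tests consume. A minimal sketch of such a step, assuming the cnode1 subsystem and the 10.0.0.2 listener that are created later in this log, would be:

  # connect the kernel initiator using the generated host identity
  $NVME_CONNECT -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
      -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
  # wait until a block device carrying the target's serial shows up
  lsblk -l -o NAME,SERIAL | grep -q -w "$NVMF_SERIAL"

This is only an illustration of how the sourced variables fit together (the zcopy test below uses bdevperf rather than the kernel initiator); it is not a verbatim excerpt of the test scripts.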
00:15:46.417 13:00:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:46.417 13:00:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:46.417 13:00:27 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:46.417 13:00:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:46.417 13:00:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.417 13:00:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:46.417 13:00:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:46.417 13:00:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:46.417 13:00:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.417 13:00:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.417 13:00:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.417 13:00:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:46.417 13:00:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:46.417 13:00:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:46.417 13:00:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:46.417 13:00:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:46.417 13:00:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:46.417 13:00:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.417 13:00:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.417 13:00:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:46.417 13:00:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:46.417 13:00:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.417 13:00:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.417 13:00:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.417 13:00:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.417 13:00:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.417 13:00:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.417 13:00:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.417 13:00:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.417 13:00:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:46.417 13:00:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:46.417 Cannot find device "nvmf_tgt_br" 00:15:46.417 13:00:27 -- nvmf/common.sh@154 -- # true 00:15:46.417 13:00:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.417 Cannot find device "nvmf_tgt_br2" 00:15:46.417 13:00:27 -- nvmf/common.sh@155 -- # true 00:15:46.417 13:00:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:46.418 13:00:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:46.418 Cannot find device "nvmf_tgt_br" 00:15:46.418 13:00:27 -- nvmf/common.sh@157 -- # true 00:15:46.418 13:00:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:46.418 Cannot find device "nvmf_tgt_br2" 00:15:46.418 13:00:27 -- nvmf/common.sh@158 -- # true 00:15:46.418 13:00:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:46.418 13:00:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:46.677 13:00:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.677 13:00:27 -- nvmf/common.sh@161 -- # true 00:15:46.677 13:00:27 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.677 13:00:27 -- nvmf/common.sh@162 -- # true 00:15:46.677 13:00:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.677 13:00:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.677 13:00:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.677 13:00:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.677 13:00:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.677 13:00:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.677 13:00:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.677 13:00:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.677 13:00:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:46.677 13:00:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:46.677 13:00:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:46.677 13:00:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:46.677 13:00:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:46.677 13:00:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.677 13:00:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.677 13:00:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.677 13:00:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:46.677 13:00:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:46.677 13:00:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.677 13:00:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.677 13:00:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.677 13:00:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.677 13:00:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.677 13:00:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:46.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:46.677 00:15:46.677 --- 10.0.0.2 ping statistics --- 00:15:46.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.677 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:46.677 13:00:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:46.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:46.677 00:15:46.677 --- 10.0.0.3 ping statistics --- 00:15:46.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.677 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:46.677 13:00:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:46.677 00:15:46.677 --- 10.0.0.1 ping statistics --- 00:15:46.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.677 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:46.677 13:00:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.677 13:00:27 -- nvmf/common.sh@421 -- # return 0 00:15:46.677 13:00:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:46.677 13:00:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.677 13:00:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:46.677 13:00:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:46.677 13:00:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.677 13:00:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:46.677 13:00:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:46.677 13:00:27 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:46.677 13:00:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:46.677 13:00:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.677 13:00:27 -- common/autotest_common.sh@10 -- # set +x 00:15:46.677 13:00:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:46.677 13:00:27 -- nvmf/common.sh@469 -- # nvmfpid=85839 00:15:46.677 13:00:27 -- nvmf/common.sh@470 -- # waitforlisten 85839 00:15:46.677 13:00:27 -- common/autotest_common.sh@829 -- # '[' -z 85839 ']' 00:15:46.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.677 13:00:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.677 13:00:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.677 13:00:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.677 13:00:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.677 13:00:27 -- common/autotest_common.sh@10 -- # set +x 00:15:46.936 [2024-12-13 13:00:27.457114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:46.936 [2024-12-13 13:00:27.457719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.936 [2024-12-13 13:00:27.594259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.936 [2024-12-13 13:00:27.661228] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:46.936 [2024-12-13 13:00:27.661404] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.936 [2024-12-13 13:00:27.661420] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.936 [2024-12-13 13:00:27.661431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
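At this point nvmf_veth_init has built the virtual topology the rest of the test runs on: the target side lives in the nvmf_tgt_ns_spdk network namespace at 10.0.0.2/10.0.0.3, and the initiator side reaches it from 10.0.0.1 over a bridge. Condensed to a single target interface, the commands traced above amount to roughly the following sketch (abridged; the second interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check: initiator can reach the target address

The three pings above confirm reachability in both directions before nvmf_tgt is launched inside the namespace (the nvmfappstart step whose startup messages surround this note).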
00:15:46.936 [2024-12-13 13:00:27.661470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.872 13:00:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.872 13:00:28 -- common/autotest_common.sh@862 -- # return 0 00:15:47.872 13:00:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.872 13:00:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.872 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 13:00:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.872 13:00:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:47.872 13:00:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:47.872 13:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.872 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 [2024-12-13 13:00:28.507333] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.872 13:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.872 13:00:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:47.872 13:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.872 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 13:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.872 13:00:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.872 13:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.872 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 [2024-12-13 13:00:28.523467] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.872 13:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.872 13:00:28 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:47.872 13:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.872 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 13:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.872 13:00:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:47.872 13:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.872 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 malloc0 00:15:47.872 13:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.872 13:00:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:47.872 13:00:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.872 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 13:00:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.872 13:00:28 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:47.872 13:00:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:47.872 13:00:28 -- nvmf/common.sh@520 -- # config=() 00:15:47.872 13:00:28 -- nvmf/common.sh@520 -- # local subsystem config 00:15:47.872 13:00:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:47.872 13:00:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:47.872 { 00:15:47.872 "params": { 00:15:47.872 "name": "Nvme$subsystem", 00:15:47.872 "trtype": "$TEST_TRANSPORT", 
00:15:47.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:47.872 "adrfam": "ipv4", 00:15:47.872 "trsvcid": "$NVMF_PORT", 00:15:47.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:47.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:47.872 "hdgst": ${hdgst:-false}, 00:15:47.872 "ddgst": ${ddgst:-false} 00:15:47.872 }, 00:15:47.872 "method": "bdev_nvme_attach_controller" 00:15:47.872 } 00:15:47.872 EOF 00:15:47.872 )") 00:15:47.872 13:00:28 -- nvmf/common.sh@542 -- # cat 00:15:47.872 13:00:28 -- nvmf/common.sh@544 -- # jq . 00:15:47.872 13:00:28 -- nvmf/common.sh@545 -- # IFS=, 00:15:47.872 13:00:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:47.872 "params": { 00:15:47.872 "name": "Nvme1", 00:15:47.872 "trtype": "tcp", 00:15:47.872 "traddr": "10.0.0.2", 00:15:47.872 "adrfam": "ipv4", 00:15:47.872 "trsvcid": "4420", 00:15:47.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:47.872 "hdgst": false, 00:15:47.872 "ddgst": false 00:15:47.872 }, 00:15:47.872 "method": "bdev_nvme_attach_controller" 00:15:47.872 }' 00:15:47.872 [2024-12-13 13:00:28.602331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:47.872 [2024-12-13 13:00:28.602393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85890 ] 00:15:48.131 [2024-12-13 13:00:28.737785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.131 [2024-12-13 13:00:28.797508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.389 Running I/O for 10 seconds... 00:15:58.362 00:15:58.362 Latency(us) 00:15:58.362 [2024-12-13T13:00:39.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.362 [2024-12-13T13:00:39.138Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:58.362 Verification LBA range: start 0x0 length 0x1000 00:15:58.362 Nvme1n1 : 10.01 10972.65 85.72 0.00 0.00 11636.60 1243.69 20375.74 00:15:58.362 [2024-12-13T13:00:39.138Z] =================================================================================================================== 00:15:58.362 [2024-12-13T13:00:39.138Z] Total : 10972.65 85.72 0.00 0.00 11636.60 1243.69 20375.74 00:15:58.621 13:00:39 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:58.621 13:00:39 -- target/zcopy.sh@39 -- # perfpid=86006 00:15:58.621 13:00:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:58.621 13:00:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:58.621 13:00:39 -- common/autotest_common.sh@10 -- # set +x 00:15:58.621 13:00:39 -- nvmf/common.sh@520 -- # config=() 00:15:58.621 13:00:39 -- nvmf/common.sh@520 -- # local subsystem config 00:15:58.621 13:00:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:58.621 13:00:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:58.621 { 00:15:58.621 "params": { 00:15:58.621 "name": "Nvme$subsystem", 00:15:58.621 "trtype": "$TEST_TRANSPORT", 00:15:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.621 "adrfam": "ipv4", 00:15:58.621 "trsvcid": "$NVMF_PORT", 00:15:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.621 "hdgst": ${hdgst:-false}, 00:15:58.621 "ddgst": ${ddgst:-false} 
00:15:58.621 }, 00:15:58.621 "method": "bdev_nvme_attach_controller" 00:15:58.621 } 00:15:58.621 EOF 00:15:58.621 )") 00:15:58.621 13:00:39 -- nvmf/common.sh@542 -- # cat 00:15:58.621 [2024-12-13 13:00:39.187739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.187808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 13:00:39 -- nvmf/common.sh@544 -- # jq . 00:15:58.621 13:00:39 -- nvmf/common.sh@545 -- # IFS=, 00:15:58.621 13:00:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:58.621 "params": { 00:15:58.621 "name": "Nvme1", 00:15:58.621 "trtype": "tcp", 00:15:58.621 "traddr": "10.0.0.2", 00:15:58.621 "adrfam": "ipv4", 00:15:58.621 "trsvcid": "4420", 00:15:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.621 "hdgst": false, 00:15:58.621 "ddgst": false 00:15:58.621 }, 00:15:58.621 "method": "bdev_nvme_attach_controller" 00:15:58.621 }' 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.199707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.199738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.211705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.211733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.223708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.223735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.233242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:58.621 [2024-12-13 13:00:39.233334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86006 ] 00:15:58.621 [2024-12-13 13:00:39.235710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.235956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.247717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.247952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.259732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.259909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.271733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.271930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.283735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.283788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.295733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.295783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.307739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.307788] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.621 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.621 [2024-12-13 13:00:39.319782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.621 [2024-12-13 13:00:39.319807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.622 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.622 [2024-12-13 13:00:39.331765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.622 [2024-12-13 13:00:39.331805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.622 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.622 [2024-12-13 13:00:39.343774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.622 [2024-12-13 13:00:39.343797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.622 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.622 [2024-12-13 13:00:39.355807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.622 [2024-12-13 13:00:39.355971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.622 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.622 [2024-12-13 13:00:39.367813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.622 [2024-12-13 13:00:39.367843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.622 [2024-12-13 13:00:39.371790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.622 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.622 [2024-12-13 13:00:39.379820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.622 [2024-12-13 13:00:39.379849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.622 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.622 [2024-12-13 13:00:39.391817] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.622 [2024-12-13 13:00:39.391854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.622 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.403805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.403830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.415819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.416000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.427818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.427846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.435741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.881 [2024-12-13 13:00:39.439831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.439860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.451832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.451860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.463825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.464021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.475845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.475873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.487849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.487879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.499834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.499862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.511823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.511850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.523826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.523856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.535847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.536041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.881 [2024-12-13 13:00:39.547844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.881 [2024-12-13 13:00:39.547887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.881 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 [2024-12-13 13:00:39.559886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.559916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 [2024-12-13 13:00:39.571870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.571899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 [2024-12-13 13:00:39.583873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.584055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 [2024-12-13 13:00:39.595892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.595926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 Running I/O for 5 seconds... 
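The repeated Code=-32602 entries in this stretch all come from nvmf_subsystem_add_ns calls issued while NSID 1 is still attached to cnode1; the target rejects each one with "Requested NSID 1 already in use" and the bdevperf I/O keeps running. Each failing call reduces to roughly the following sketch, using the same rpc.py path, NQN, bdev name and NSID shown in the log (the surrounding retry loop and timing are the test's own):

  # re-adding NSID 1 is expected to fail with JSON-RPC -32602 (Invalid parameters)
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
      echo "unexpected success: NSID 1 should already be in use" >&2
      exit 1
  fi

Only the RPC attempts are rejected; the 5-second randrw bdevperf run started just above continues unaffected.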
00:15:58.882 [2024-12-13 13:00:39.607875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.607903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 [2024-12-13 13:00:39.625026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.625059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 [2024-12-13 13:00:39.640621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.640816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.882 [2024-12-13 13:00:39.651738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.882 [2024-12-13 13:00:39.651820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.882 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-12-13 13:00:39.668402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-12-13 13:00:39.668453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-12-13 13:00:39.685859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-12-13 13:00:39.685914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-12-13 13:00:39.700175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-12-13 13:00:39.700223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:59.140 [2024-12-13 13:00:39.716131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-12-13 13:00:39.716180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-12-13 13:00:39.733306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-12-13 13:00:39.733355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-12-13 13:00:39.749474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-12-13 13:00:39.749523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.766354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.766403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.782378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.782427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.799274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.799324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.816227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.816276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.832443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.832492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.849214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.849263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.865593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.865643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.882487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.882537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-12-13 13:00:39.898494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-12-13 13:00:39.898544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:39.916722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:39.916784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:39.929579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:39.929629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:39.945498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:39.945547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:39.962855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:39.962916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:39.979075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:39.979108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:39.995559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:39.995609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:40.012593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:40.012645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:40.029212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:40.029262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:40.044754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:40.044813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.399 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.399 [2024-12-13 13:00:40.061895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.399 [2024-12-13 13:00:40.061944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.400 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.400 [2024-12-13 13:00:40.077937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.400 [2024-12-13 13:00:40.077986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.400 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.400 [2024-12-13 13:00:40.095124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.400 [2024-12-13 13:00:40.095173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.400 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.400 [2024-12-13 13:00:40.111218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.400 [2024-12-13 13:00:40.111252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.400 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.400 [2024-12-13 13:00:40.128443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.400 [2024-12-13 13:00:40.128493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.400 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.400 [2024-12-13 13:00:40.144330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.400 [2024-12-13 13:00:40.144379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.400 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.400 [2024-12-13 13:00:40.161050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.400 [2024-12-13 13:00:40.161099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.400 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.177805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.177865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.193359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.193408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.208100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.208151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.219531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.219580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.235766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.235826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.252171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.252221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.268553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.268585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.284726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.284784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.301844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.301875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.318176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.318209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.335743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.335783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.352447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.352479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.659 [2024-12-13 13:00:40.366269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.659 [2024-12-13 13:00:40.366300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.659 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.660 [2024-12-13 13:00:40.381836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.660 [2024-12-13 13:00:40.381866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.660 2024/12/13 13:00:40 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.660 [2024-12-13 13:00:40.398581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.660 [2024-12-13 13:00:40.398613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.660 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.660 [2024-12-13 13:00:40.415039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.660 [2024-12-13 13:00:40.415072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.660 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.660 [2024-12-13 13:00:40.432305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.660 [2024-12-13 13:00:40.432353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.446354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.446403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.462072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.462123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.478665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.478715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.495687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.495737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.512100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.512150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.527832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.527885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.539425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.539473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.555663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.555714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.572542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.572592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.588480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.588529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.604961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.605012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 
13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.622302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.622351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.638388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.638437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.655262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.655297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.671655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.919 [2024-12-13 13:00:40.671688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.919 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.919 [2024-12-13 13:00:40.687205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.920 [2024-12-13 13:00:40.687241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.920 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.178 [2024-12-13 13:00:40.698833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.178 [2024-12-13 13:00:40.698919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.178 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.178 [2024-12-13 13:00:40.714549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.178 [2024-12-13 13:00:40.714581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:00.178 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.731754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.731813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.746846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.746879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.758355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.758388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.773686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.773719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.790857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.790890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.806211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.806247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.821835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.821867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.838530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.838563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.854524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.854558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.871583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.871616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.888364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.888551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.905051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.905218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.921796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.921975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.179 [2024-12-13 13:00:40.938210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.179 [2024-12-13 13:00:40.938389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:00.179 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:40.955409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.438 [2024-12-13 13:00:40.955578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.438 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:40.970112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.438 [2024-12-13 13:00:40.970310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.438 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:40.986235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.438 [2024-12-13 13:00:40.986416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.438 2024/12/13 13:00:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:41.003337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.438 [2024-12-13 13:00:41.003538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.438 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:41.019469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.438 [2024-12-13 13:00:41.019505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.438 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:41.036288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.438 [2024-12-13 13:00:41.036321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.438 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:41.053403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.438 [2024-12-13 13:00:41.053583] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.438 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.438 [2024-12-13 13:00:41.070190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.070224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.086589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.086622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.103897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.103929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.120412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.120445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.136907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.136940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.154021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.154054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.170276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 
13:00:41.170309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.187588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.187787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.439 [2024-12-13 13:00:41.202905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.439 [2024-12-13 13:00:41.202939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.439 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.220011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.220045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.236835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.236867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.253676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.253708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.269807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.269840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.287010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:00.698 [2024-12-13 13:00:41.287045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.303760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.303819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.320660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.320693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.336947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.336980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.353472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.353506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.369604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.369638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.385762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.385806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.402557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:00.698 [2024-12-13 13:00:41.402590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.419154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.419190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.435021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.435058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.447226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.447275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.698 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.698 [2024-12-13 13:00:41.462798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.698 [2024-12-13 13:00:41.462864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.699 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.480693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.480726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.494442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.494476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.510678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.510710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.527380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.527428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.544005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.544041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.561385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.561568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.576688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.576916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.592461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.592639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.610028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.610062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.625631] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.625664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.636679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.636712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.652138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.652171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.668815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.668847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.684961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.684993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.701516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.701549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.958 [2024-12-13 13:00:41.717778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.958 [2024-12-13 13:00:41.717810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.958 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.217 [2024-12-13 
13:00:41.735401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.217 [2024-12-13 13:00:41.735582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.217 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.217 [2024-12-13 13:00:41.750374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.217 [2024-12-13 13:00:41.750552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.217 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.217 [2024-12-13 13:00:41.766012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.217 [2024-12-13 13:00:41.766045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.217 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.217 [2024-12-13 13:00:41.782707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.217 [2024-12-13 13:00:41.782770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.217 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.217 [2024-12-13 13:00:41.798901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.217 [2024-12-13 13:00:41.798936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.217 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.217 [2024-12-13 13:00:41.816329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.217 [2024-12-13 13:00:41.816361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.217 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.832742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.832805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:01.218 [2024-12-13 13:00:41.849054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.849274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.866256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.866288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.882213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.882245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.898860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.898904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.915869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.915914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.932461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.932506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.948853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.948899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.965058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.965103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.218 [2024-12-13 13:00:41.982561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.218 [2024-12-13 13:00:41.982608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.218 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:41.997852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:41.997910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.008965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.009011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.024997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.025042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.040198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.040243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.055998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.056044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.073112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.073157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.089320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.089367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.106318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.106363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.123562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.123607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.137923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.137967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.153550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.153595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.169638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.169683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.186612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.186659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.203465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.203509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.219986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.220030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.477 [2024-12-13 13:00:42.236664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.477 [2024-12-13 13:00:42.236710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.477 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.254466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.254511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.269702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.269747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.281055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.281101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.297188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.297233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.314090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.314135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.330261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.330308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.347225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.347272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.363902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.363948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.379986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.380030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.397037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.397082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.413212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.413257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.429781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.429825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.445768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.445811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.457200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.457244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.473550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.473597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.487690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.487735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.737 [2024-12-13 13:00:42.504254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.737 [2024-12-13 13:00:42.504298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.737 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.996 [2024-12-13 13:00:42.519744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.996 [2024-12-13 13:00:42.519790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.996 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.996 [2024-12-13 13:00:42.531105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.996 [2024-12-13 13:00:42.531144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.996 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.996 [2024-12-13 13:00:42.547744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.996 [2024-12-13 13:00:42.547838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.996 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.996 [2024-12-13 13:00:42.563523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.996 [2024-12-13 13:00:42.563574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.996 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.996 [2024-12-13 13:00:42.580557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.996 [2024-12-13 13:00:42.580607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.996 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.996 [2024-12-13 13:00:42.596713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.996 [2024-12-13 13:00:42.596788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.996 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.996 [2024-12-13 13:00:42.613384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.996 [2024-12-13 13:00:42.613436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.629699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.629774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.646206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.646256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.662865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.662915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.679927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.679977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.697437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.697487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.713578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.713627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.730476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.730527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.747599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.747646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.997 [2024-12-13 13:00:42.762734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.997 [2024-12-13 13:00:42.762809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.997 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.774523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.774571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.790589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.790636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.806566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.806614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.823860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.823910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.839778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.839835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 
13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.856971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.857019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.873757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.873804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.889890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.889941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.906460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.906508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.922733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.922792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.940311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.940359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.956754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.956831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.973161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.973214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:42.990455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:42.990505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:43.006827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:43.006878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.256 [2024-12-13 13:00:43.023608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.256 [2024-12-13 13:00:43.023657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.256 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.515 [2024-12-13 13:00:43.038948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.515 [2024-12-13 13:00:43.039023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.515 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.515 [2024-12-13 13:00:43.049505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.515 [2024-12-13 13:00:43.049554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.515 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.065235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.065283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.082309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.082357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.098236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.098286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.115525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.115573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.132049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.132097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.149088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.149121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.165788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.165834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.182232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.182280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.199214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.199264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.215041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.215091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.232059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.232107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.248312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.248360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.264500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.264548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.516 [2024-12-13 13:00:43.282000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.516 [2024-12-13 13:00:43.282048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.516 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.297450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.297498] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.313812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.313861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.330143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.330191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.347064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.347114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.363980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.364027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.380581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.380629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.396673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.396721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.412691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 
13:00:43.412767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.429404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.429453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.445644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.445695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.463383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.775 [2024-12-13 13:00:43.463432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.775 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.775 [2024-12-13 13:00:43.479090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.776 [2024-12-13 13:00:43.479140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.776 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.776 [2024-12-13 13:00:43.495348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.776 [2024-12-13 13:00:43.495398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.776 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.776 [2024-12-13 13:00:43.512495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.776 [2024-12-13 13:00:43.512543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.776 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.776 [2024-12-13 13:00:43.528881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
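The entries above all exercise the same negative path: the test keeps issuing the nvmf_subsystem_add_ns JSON-RPC method with NSID 1 against nqn.2016-06.io.spdk:cnode1 while that NSID is already attached, and the target rejects each call with Code=-32602 Msg=Invalid parameters. As a minimal sketch only (the method name and params are copied from the log entries above; the /var/tmp/spdk.sock path is SPDK's usual default RPC socket and is an assumption here, since the log does not show it), the request the client is retrying looks roughly like:

import json
import socket

# Sketch, not part of the test scripts: reproduce one of the failing RPC calls.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",          # method shown in the log
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",     # subsystem NQN from the log
        "namespace": {"bdev_name": "malloc0", "nsid": 1},  # params from the log
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket path
    sock.sendall(json.dumps(request).encode())
    # While NSID 1 is still attached to the subsystem, the reply carries the
    # -32602 "Invalid parameters" error repeated throughout this log.
    print(sock.recv(65536).decode())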
00:16:02.776 [2024-12-13 13:00:43.528928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.776 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.776 [2024-12-13 13:00:43.545339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.776 [2024-12-13 13:00:43.545388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.776 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.560713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.560794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.576252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.576300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.587155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.587190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.603412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.603459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.620049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.620102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.636059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:03.035 [2024-12-13 13:00:43.636108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.652865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.652915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.669597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.669647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.686210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.686259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.702439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.702489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.719478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.719528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.734416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.734465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.749165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.749215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.765391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.765440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.782416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.782466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.035 [2024-12-13 13:00:43.799890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.035 [2024-12-13 13:00:43.799946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.035 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.814345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.814382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.831389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.831440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.845562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.845612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.862056] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.862105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.872968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.873018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.888845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.888893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.899713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.899789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.915583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.915632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.932006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.932055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.948511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.948565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 
13:00:43.964625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.964674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.981459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.981508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:43.997777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:43.997825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:44.013881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:44.013928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:44.030411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:44.030461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:44.046840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:44.046889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.295 [2024-12-13 13:00:44.063101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.295 [2024-12-13 13:00:44.063153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.295 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:03.554 [2024-12-13 13:00:44.080490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.080538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.097515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.097563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.114009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.114056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.130578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.130627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.146947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.147020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.163216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.163266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.174433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.174481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.190198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.190246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.207193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.207243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.223670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.223718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.239222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.554 [2024-12-13 13:00:44.239274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.554 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.554 [2024-12-13 13:00:44.250271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.555 [2024-12-13 13:00:44.250329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.555 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.555 [2024-12-13 13:00:44.266211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.555 [2024-12-13 13:00:44.266260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.555 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.555 [2024-12-13 13:00:44.282783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.555 [2024-12-13 13:00:44.282831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.555 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:03.555 [2024-12-13 13:00:44.299405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.555 [2024-12-13 13:00:44.299453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.555 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.555 [2024-12-13 13:00:44.315675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.555 [2024-12-13 13:00:44.315724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.555 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.332577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.332627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.347989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.348037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.363956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.364004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.380477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.380525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.396951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.396999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.413364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.413412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.430459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.430509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.446597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.446647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.463606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.463654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.479706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.479780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.490841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.490890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.814 [2024-12-13 13:00:44.506439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.814 [2024-12-13 13:00:44.506487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.814 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
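For context, each failing entry above is one nvmf_subsystem_add_ns JSON-RPC call asking for an NSID that is already attached on nqn.2016-06.io.spdk:cnode1. A minimal by-hand reproduction against an SPDK target, offered only as a sketch that assumes the stock scripts/rpc.py from the checked-out repo plus the 64 MiB / 512 B malloc geometry this test suite uses, would be:

# create the backing bdev and the subsystem seen in the log
scripts/rpc.py bdev_malloc_create -b malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
# the first attach claims NSID 1 and succeeds
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# asking for NSID 1 again is rejected exactly as in the log above:
#   "Requested NSID 1 already in use" -> Code=-32602 Msg=Invalid parameters
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1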
00:16:04.074 Latency(us)
00:16:04.074 [2024-12-13T13:00:44.850Z] Device Information : runtime(s)      IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:16:04.074 [2024-12-13T13:00:44.850Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:04.074 Nvme1n1            :        5.01  13470.55   105.24    0.00  0.00  9491.91  4051.32  22997.18
00:16:04.074 [2024-12-13T13:00:44.850Z] ===================================================================================================================
00:16:04.074 [2024-12-13T13:00:44.850Z] Total              :              13470.55   105.24    0.00  0.00  9491.91  4051.32  22997.18
[... once the I/O job has reported its totals, the same add_ns failure cycle resumes at 13:00:44.616, with entries now about 12 ms apart, for a dozen or so further iterations; those repeats are likewise omitted ...]
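The Go-style params map printed in each of these entries is simply the JSON-RPC 2.0 request body the test tool sends. Spelled out as JSON and, purely as a sketch, pushed at the target's default /var/tmp/spdk.sock listener (whether a bare nc -U handles the socket framing depends on the netcat build), one failed attempt corresponds to:

# request equivalent to: params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]
printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1}}}' | nc -U /var/tmp/spdk.sock
# reply while NSID 1 is still attached, matching the Code/Msg fields logged above:
#   {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}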
2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.074 [2024-12-13 13:00:44.784471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.074 [2024-12-13 13:00:44.784520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.074 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.074 [2024-12-13 13:00:44.796468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.074 [2024-12-13 13:00:44.796512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.074 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.074 [2024-12-13 13:00:44.808457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.074 [2024-12-13 13:00:44.808497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.074 2024/12/13 13:00:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.074 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86006) - No such process 00:16:04.074 13:00:44 -- target/zcopy.sh@49 -- # wait 86006 00:16:04.074 13:00:44 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.074 13:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.074 13:00:44 -- common/autotest_common.sh@10 -- # set +x 00:16:04.074 13:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.074 13:00:44 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:04.074 13:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.074 13:00:44 -- common/autotest_common.sh@10 -- # set +x 00:16:04.074 delay0 00:16:04.074 13:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.074 13:00:44 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:04.074 13:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.074 13:00:44 -- common/autotest_common.sh@10 -- # set +x 00:16:04.074 13:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.074 13:00:44 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:04.333 [2024-12-13 13:00:44.993485] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:10.898 Initializing NVMe Controllers 00:16:10.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.898 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:10.898 Initialization complete. Launching workers. 00:16:10.898 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:16:10.898 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:16:10.898 success 195, unsuccess 177, failed 0 00:16:10.898 13:00:51 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:10.898 13:00:51 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:10.898 13:00:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:10.898 13:00:51 -- nvmf/common.sh@116 -- # sync 00:16:10.898 13:00:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:10.898 13:00:51 -- nvmf/common.sh@119 -- # set +e 00:16:10.898 13:00:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:10.898 13:00:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:10.898 rmmod nvme_tcp 00:16:10.898 rmmod nvme_fabrics 00:16:10.898 rmmod nvme_keyring 00:16:10.898 13:00:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:10.898 13:00:51 -- nvmf/common.sh@123 -- # set -e 00:16:10.898 13:00:51 -- nvmf/common.sh@124 -- # return 0 00:16:10.898 13:00:51 -- nvmf/common.sh@477 -- # '[' -n 85839 ']' 00:16:10.898 13:00:51 -- nvmf/common.sh@478 -- # killprocess 85839 00:16:10.898 13:00:51 -- common/autotest_common.sh@936 -- # '[' -z 85839 ']' 00:16:10.898 13:00:51 -- common/autotest_common.sh@940 -- # kill -0 85839 00:16:10.898 13:00:51 -- common/autotest_common.sh@941 -- # uname 00:16:10.898 13:00:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:10.898 13:00:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85839 00:16:10.898 13:00:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:10.898 13:00:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:10.898 killing process with pid 85839 00:16:10.898 13:00:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85839' 00:16:10.898 13:00:51 -- common/autotest_common.sh@955 -- # kill 85839 00:16:10.898 13:00:51 -- common/autotest_common.sh@960 -- # wait 85839 00:16:10.898 13:00:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:10.898 13:00:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:10.898 13:00:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:10.898 13:00:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.898 13:00:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:10.898 13:00:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.898 13:00:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.898 13:00:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.898 13:00:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:10.898 00:16:10.898 real 0m24.559s 00:16:10.898 user 0m39.789s 00:16:10.898 sys 0m6.453s 00:16:10.898 13:00:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:10.898 13:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:10.898 ************************************ 00:16:10.898 END TEST nvmf_zcopy 00:16:10.898 ************************************ 00:16:10.898 13:00:51 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:10.898 13:00:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:10.898 13:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:10.898 13:00:51 -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.898 ************************************ 00:16:10.898 START TEST nvmf_nmic 00:16:10.898 ************************************ 00:16:10.898 13:00:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:10.898 * Looking for test storage... 00:16:10.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:10.898 13:00:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:10.898 13:00:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:10.898 13:00:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:10.898 13:00:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:10.898 13:00:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:10.898 13:00:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:10.898 13:00:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:10.898 13:00:51 -- scripts/common.sh@335 -- # IFS=.-: 00:16:10.898 13:00:51 -- scripts/common.sh@335 -- # read -ra ver1 00:16:10.898 13:00:51 -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.898 13:00:51 -- scripts/common.sh@336 -- # read -ra ver2 00:16:10.898 13:00:51 -- scripts/common.sh@337 -- # local 'op=<' 00:16:10.898 13:00:51 -- scripts/common.sh@339 -- # ver1_l=2 00:16:10.898 13:00:51 -- scripts/common.sh@340 -- # ver2_l=1 00:16:10.898 13:00:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:10.898 13:00:51 -- scripts/common.sh@343 -- # case "$op" in 00:16:10.898 13:00:51 -- scripts/common.sh@344 -- # : 1 00:16:10.898 13:00:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:10.898 13:00:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.898 13:00:51 -- scripts/common.sh@364 -- # decimal 1 00:16:10.898 13:00:51 -- scripts/common.sh@352 -- # local d=1 00:16:10.898 13:00:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.898 13:00:51 -- scripts/common.sh@354 -- # echo 1 00:16:10.898 13:00:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:10.898 13:00:51 -- scripts/common.sh@365 -- # decimal 2 00:16:10.898 13:00:51 -- scripts/common.sh@352 -- # local d=2 00:16:10.898 13:00:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.898 13:00:51 -- scripts/common.sh@354 -- # echo 2 00:16:10.898 13:00:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:10.898 13:00:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:10.898 13:00:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:10.898 13:00:51 -- scripts/common.sh@367 -- # return 0 00:16:10.898 13:00:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.898 13:00:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:10.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.898 --rc genhtml_branch_coverage=1 00:16:10.898 --rc genhtml_function_coverage=1 00:16:10.898 --rc genhtml_legend=1 00:16:10.898 --rc geninfo_all_blocks=1 00:16:10.898 --rc geninfo_unexecuted_blocks=1 00:16:10.898 00:16:10.898 ' 00:16:10.898 13:00:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:10.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.898 --rc genhtml_branch_coverage=1 00:16:10.898 --rc genhtml_function_coverage=1 00:16:10.898 --rc genhtml_legend=1 00:16:10.898 --rc geninfo_all_blocks=1 00:16:10.898 --rc geninfo_unexecuted_blocks=1 00:16:10.898 00:16:10.898 ' 00:16:10.898 13:00:51 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:16:10.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.898 --rc genhtml_branch_coverage=1 00:16:10.898 --rc genhtml_function_coverage=1 00:16:10.898 --rc genhtml_legend=1 00:16:10.898 --rc geninfo_all_blocks=1 00:16:10.898 --rc geninfo_unexecuted_blocks=1 00:16:10.898 00:16:10.898 ' 00:16:10.898 13:00:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:10.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.898 --rc genhtml_branch_coverage=1 00:16:10.898 --rc genhtml_function_coverage=1 00:16:10.898 --rc genhtml_legend=1 00:16:10.898 --rc geninfo_all_blocks=1 00:16:10.898 --rc geninfo_unexecuted_blocks=1 00:16:10.898 00:16:10.898 ' 00:16:10.898 13:00:51 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.898 13:00:51 -- nvmf/common.sh@7 -- # uname -s 00:16:10.898 13:00:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.898 13:00:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.898 13:00:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.898 13:00:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.898 13:00:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.898 13:00:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.898 13:00:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.898 13:00:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.898 13:00:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.898 13:00:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.158 13:00:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:11.158 13:00:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:11.158 13:00:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.158 13:00:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.158 13:00:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.158 13:00:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.158 13:00:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.158 13:00:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.158 13:00:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.158 13:00:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.158 13:00:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:11.158 13:00:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.158 13:00:51 -- paths/export.sh@5 -- # export PATH 00:16:11.158 13:00:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.158 13:00:51 -- nvmf/common.sh@46 -- # : 0 00:16:11.158 13:00:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.158 13:00:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.158 13:00:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.158 13:00:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.158 13:00:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.158 13:00:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.158 13:00:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.158 13:00:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.158 13:00:51 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.158 13:00:51 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.158 13:00:51 -- target/nmic.sh@14 -- # nvmftestinit 00:16:11.158 13:00:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:11.158 13:00:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.158 13:00:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:11.158 13:00:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:11.158 13:00:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:11.158 13:00:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.158 13:00:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.158 13:00:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.158 13:00:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:11.158 13:00:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:11.158 13:00:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:11.158 13:00:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:11.158 13:00:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:11.158 13:00:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:11.158 13:00:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.158 13:00:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.158 13:00:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:11.158 13:00:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:11.158 13:00:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.158 13:00:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.158 
13:00:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.158 13:00:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.158 13:00:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.158 13:00:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.158 13:00:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.158 13:00:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.158 13:00:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:11.158 13:00:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:11.158 Cannot find device "nvmf_tgt_br" 00:16:11.158 13:00:51 -- nvmf/common.sh@154 -- # true 00:16:11.158 13:00:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.158 Cannot find device "nvmf_tgt_br2" 00:16:11.158 13:00:51 -- nvmf/common.sh@155 -- # true 00:16:11.158 13:00:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:11.158 13:00:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:11.158 Cannot find device "nvmf_tgt_br" 00:16:11.158 13:00:51 -- nvmf/common.sh@157 -- # true 00:16:11.158 13:00:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:11.158 Cannot find device "nvmf_tgt_br2" 00:16:11.158 13:00:51 -- nvmf/common.sh@158 -- # true 00:16:11.158 13:00:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:11.158 13:00:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:11.158 13:00:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.158 13:00:51 -- nvmf/common.sh@161 -- # true 00:16:11.158 13:00:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.159 13:00:51 -- nvmf/common.sh@162 -- # true 00:16:11.159 13:00:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.159 13:00:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.159 13:00:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.159 13:00:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.159 13:00:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.159 13:00:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.159 13:00:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.159 13:00:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.159 13:00:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.159 13:00:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:11.159 13:00:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:11.159 13:00:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:11.159 13:00:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:11.159 13:00:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.159 13:00:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.159 13:00:51 -- nvmf/common.sh@188 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.418 13:00:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:11.418 13:00:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:11.418 13:00:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.418 13:00:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.418 13:00:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.418 13:00:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.418 13:00:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.418 13:00:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:11.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:11.418 00:16:11.418 --- 10.0.0.2 ping statistics --- 00:16:11.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.418 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:11.418 13:00:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:11.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:11.418 00:16:11.418 --- 10.0.0.3 ping statistics --- 00:16:11.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.418 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:11.418 13:00:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:11.418 00:16:11.418 --- 10.0.0.1 ping statistics --- 00:16:11.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.418 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:11.418 13:00:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.418 13:00:52 -- nvmf/common.sh@421 -- # return 0 00:16:11.418 13:00:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.418 13:00:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.418 13:00:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.418 13:00:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.418 13:00:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.418 13:00:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.418 13:00:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.418 13:00:52 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:11.418 13:00:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:11.418 13:00:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.418 13:00:52 -- common/autotest_common.sh@10 -- # set +x 00:16:11.418 13:00:52 -- nvmf/common.sh@469 -- # nvmfpid=86335 00:16:11.418 13:00:52 -- nvmf/common.sh@470 -- # waitforlisten 86335 00:16:11.418 13:00:52 -- common/autotest_common.sh@829 -- # '[' -z 86335 ']' 00:16:11.418 13:00:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:11.418 13:00:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.418 13:00:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
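The nvmf_veth_init sequence above builds a self-contained test topology: the target lives in its own network namespace, veth pairs carry the initiator and target sides, a bridge joins the host-side ends, and the 10.0.0.x addresses are what the ping checks then verify. A condensed sketch of the same commands (root plus iproute2/iptables assumed; the second target interface and its 10.0.0.3 address are omitted for brevity):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
ping -c 1 10.0.0.2   # initiator -> target reachability check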
00:16:11.418 13:00:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.418 13:00:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.418 13:00:52 -- common/autotest_common.sh@10 -- # set +x 00:16:11.418 [2024-12-13 13:00:52.094020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:11.418 [2024-12-13 13:00:52.094110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.677 [2024-12-13 13:00:52.235722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.677 [2024-12-13 13:00:52.304660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.677 [2024-12-13 13:00:52.304850] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.677 [2024-12-13 13:00:52.304869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.677 [2024-12-13 13:00:52.304881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.677 [2024-12-13 13:00:52.304973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.677 [2024-12-13 13:00:52.305325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.677 [2024-12-13 13:00:52.305654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.677 [2024-12-13 13:00:52.305686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.614 13:00:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.614 13:00:53 -- common/autotest_common.sh@862 -- # return 0 00:16:12.614 13:00:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.614 13:00:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 13:00:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.614 13:00:53 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 [2024-12-13 13:00:53.171340] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 Malloc0 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 
-- common/autotest_common.sh@10 -- # set +x 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 [2024-12-13 13:00:53.237908] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.614 test case1: single bdev can't be used in multiple subsystems 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:12.614 13:00:53 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@28 -- # nmic_status=0 00:16:12.614 13:00:53 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 [2024-12-13 13:00:53.261768] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:12.614 [2024-12-13 13:00:53.261818] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:12.614 [2024-12-13 13:00:53.261845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.614 2024/12/13 13:00:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.614 request: 00:16:12.614 { 00:16:12.614 "method": "nvmf_subsystem_add_ns", 00:16:12.614 "params": { 00:16:12.614 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:12.614 "namespace": { 00:16:12.614 "bdev_name": "Malloc0" 00:16:12.614 } 00:16:12.614 } 00:16:12.614 } 00:16:12.614 Got JSON-RPC error response 00:16:12.614 GoRPCClient: error on JSON-RPC call 00:16:12.614 Adding namespace failed - expected result. 00:16:12.614 test case2: host connect to nvmf target in multiple paths 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@29 -- # nmic_status=1 00:16:12.614 13:00:53 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:12.614 13:00:53 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
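Test case 1 above checks claim semantics: once Malloc0 is attached to cnode1, the NVMe-oF target holds an exclusive-write claim on it, so attaching it to a second subsystem is expected to fail (the Code=-32602 'Invalid parameters' JSON-RPC error in the trace). The same sequence, expressed directly with rpc.py instead of the rpc_cmd wrapper the script uses:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # succeeds; Malloc0 is now claimed
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected failure: bdev already claimed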
00:16:12.614 13:00:53 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:12.614 13:00:53 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:12.614 13:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.614 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 [2024-12-13 13:00:53.273905] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:12.614 13:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.614 13:00:53 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.873 13:00:53 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:12.873 13:00:53 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.873 13:00:53 -- common/autotest_common.sh@1187 -- # local i=0 00:16:12.873 13:00:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.873 13:00:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:12.873 13:00:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:15.405 13:00:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:15.405 13:00:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:15.405 13:00:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.405 13:00:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:15.405 13:00:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.405 13:00:55 -- common/autotest_common.sh@1197 -- # return 0 00:16:15.405 13:00:55 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:15.405 [global] 00:16:15.405 thread=1 00:16:15.405 invalidate=1 00:16:15.405 rw=write 00:16:15.405 time_based=1 00:16:15.405 runtime=1 00:16:15.405 ioengine=libaio 00:16:15.405 direct=1 00:16:15.405 bs=4096 00:16:15.405 iodepth=1 00:16:15.405 norandommap=0 00:16:15.405 numjobs=1 00:16:15.405 00:16:15.405 verify_dump=1 00:16:15.405 verify_backlog=512 00:16:15.405 verify_state_save=0 00:16:15.405 do_verify=1 00:16:15.405 verify=crc32c-intel 00:16:15.405 [job0] 00:16:15.405 filename=/dev/nvme0n1 00:16:15.405 Could not set queue depth (nvme0n1) 00:16:15.405 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.405 fio-3.35 00:16:15.405 Starting 1 thread 00:16:16.341 00:16:16.341 job0: (groupid=0, jobs=1): err= 0: pid=86440: Fri Dec 13 13:00:56 2024 00:16:16.341 read: IOPS=3505, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1000msec) 00:16:16.341 slat (nsec): min=11819, max=61781, avg=14909.30, stdev=4750.77 00:16:16.341 clat (usec): min=113, max=382, avg=138.37, stdev=17.15 00:16:16.341 lat (usec): min=126, max=410, avg=153.28, stdev=18.14 00:16:16.341 clat percentiles (usec): 00:16:16.341 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 125], 00:16:16.341 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 139], 00:16:16.341 | 70.00th=[ 145], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 00:16:16.341 | 99.00th=[ 186], 99.50th=[ 190], 
99.90th=[ 215], 99.95th=[ 227], 00:16:16.341 | 99.99th=[ 383] 00:16:16.341 write: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1000msec); 0 zone resets 00:16:16.341 slat (nsec): min=18232, max=95959, avg=23687.86, stdev=7321.43 00:16:16.341 clat (usec): min=78, max=580, avg=101.95, stdev=18.59 00:16:16.341 lat (usec): min=101, max=604, avg=125.64, stdev=20.37 00:16:16.341 clat percentiles (usec): 00:16:16.341 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 91], 00:16:16.341 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 99], 00:16:16.341 | 70.00th=[ 104], 80.00th=[ 114], 90.00th=[ 125], 95.00th=[ 133], 00:16:16.341 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 172], 99.95th=[ 529], 00:16:16.341 | 99.99th=[ 578] 00:16:16.341 bw ( KiB/s): min=15872, max=15872, per=100.00%, avg=15872.00, stdev= 0.00, samples=1 00:16:16.341 iops : min= 3968, max= 3968, avg=3968.00, stdev= 0.00, samples=1 00:16:16.341 lat (usec) : 100=31.82%, 250=68.12%, 500=0.03%, 750=0.03% 00:16:16.341 cpu : usr=2.80%, sys=9.90%, ctx=7089, majf=0, minf=5 00:16:16.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.341 issued rwts: total=3505,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.341 00:16:16.341 Run status group 0 (all jobs): 00:16:16.341 READ: bw=13.7MiB/s (14.4MB/s), 13.7MiB/s-13.7MiB/s (14.4MB/s-14.4MB/s), io=13.7MiB (14.4MB), run=1000-1000msec 00:16:16.341 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1000-1000msec 00:16:16.341 00:16:16.341 Disk stats (read/write): 00:16:16.341 nvme0n1: ios=3122/3317, merge=0/0, ticks=471/363, in_queue=834, util=91.38% 00:16:16.341 13:00:56 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:16.604 13:00:57 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:16.604 13:00:57 -- common/autotest_common.sh@1208 -- # local i=0 00:16:16.604 13:00:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:16.604 13:00:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.604 13:00:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:16.604 13:00:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.604 13:00:57 -- common/autotest_common.sh@1220 -- # return 0 00:16:16.604 13:00:57 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:16.604 13:00:57 -- target/nmic.sh@53 -- # nvmftestfini 00:16:16.604 13:00:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:16.604 13:00:57 -- nvmf/common.sh@116 -- # sync 00:16:16.604 13:00:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:16.604 13:00:57 -- nvmf/common.sh@119 -- # set +e 00:16:16.604 13:00:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:16.605 13:00:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:16.605 rmmod nvme_tcp 00:16:16.605 rmmod nvme_fabrics 00:16:16.605 rmmod nvme_keyring 00:16:16.605 13:00:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:16.605 13:00:57 -- nvmf/common.sh@123 -- # set -e 00:16:16.605 13:00:57 -- nvmf/common.sh@124 -- # return 0 00:16:16.605 13:00:57 -- nvmf/common.sh@477 -- # '[' -n 86335 ']' 00:16:16.605 13:00:57 -- 
nvmf/common.sh@478 -- # killprocess 86335 00:16:16.605 13:00:57 -- common/autotest_common.sh@936 -- # '[' -z 86335 ']' 00:16:16.605 13:00:57 -- common/autotest_common.sh@940 -- # kill -0 86335 00:16:16.605 13:00:57 -- common/autotest_common.sh@941 -- # uname 00:16:16.605 13:00:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.605 13:00:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86335 00:16:16.863 killing process with pid 86335 00:16:16.863 13:00:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.863 13:00:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.863 13:00:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86335' 00:16:16.863 13:00:57 -- common/autotest_common.sh@955 -- # kill 86335 00:16:16.863 13:00:57 -- common/autotest_common.sh@960 -- # wait 86335 00:16:16.863 13:00:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:16.863 13:00:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:16.863 13:00:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:16.863 13:00:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.863 13:00:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:16.863 13:00:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.864 13:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.864 13:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.864 13:00:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:16.864 00:16:16.864 real 0m6.143s 00:16:16.864 user 0m20.824s 00:16:16.864 sys 0m1.470s 00:16:16.864 13:00:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:16.864 13:00:57 -- common/autotest_common.sh@10 -- # set +x 00:16:16.864 ************************************ 00:16:16.864 END TEST nvmf_nmic 00:16:16.864 ************************************ 00:16:17.123 13:00:57 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:17.123 13:00:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:17.123 13:00:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.123 13:00:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.123 ************************************ 00:16:17.123 START TEST nvmf_fio_target 00:16:17.123 ************************************ 00:16:17.123 13:00:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:17.123 * Looking for test storage... 
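Test case 2 above relies on the two listeners created on ports 4420 and 4421: each 'nvme connect' sets up its own controller to the same subsystem, which is why the disconnect reports 2 controller(s). Condensed from the trace, with the host NQN/ID held in the variables common.sh populated earlier:
# One path per listener; both reach nqn.2016-06.io.spdk:cnode1 at 10.0.0.2
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# fio then writes through /dev/nvme0n1; a single disconnect by NQN drops both controllers
nvme disconnect -n nqn.2016-06.io.spdk:cnode1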
00:16:17.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:17.123 13:00:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:17.123 13:00:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:17.123 13:00:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:17.123 13:00:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:17.123 13:00:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:17.123 13:00:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:17.123 13:00:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:17.123 13:00:57 -- scripts/common.sh@335 -- # IFS=.-: 00:16:17.123 13:00:57 -- scripts/common.sh@335 -- # read -ra ver1 00:16:17.123 13:00:57 -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.123 13:00:57 -- scripts/common.sh@336 -- # read -ra ver2 00:16:17.123 13:00:57 -- scripts/common.sh@337 -- # local 'op=<' 00:16:17.123 13:00:57 -- scripts/common.sh@339 -- # ver1_l=2 00:16:17.123 13:00:57 -- scripts/common.sh@340 -- # ver2_l=1 00:16:17.123 13:00:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:17.123 13:00:57 -- scripts/common.sh@343 -- # case "$op" in 00:16:17.123 13:00:57 -- scripts/common.sh@344 -- # : 1 00:16:17.123 13:00:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:17.123 13:00:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.123 13:00:57 -- scripts/common.sh@364 -- # decimal 1 00:16:17.123 13:00:57 -- scripts/common.sh@352 -- # local d=1 00:16:17.123 13:00:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.123 13:00:57 -- scripts/common.sh@354 -- # echo 1 00:16:17.123 13:00:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:17.123 13:00:57 -- scripts/common.sh@365 -- # decimal 2 00:16:17.123 13:00:57 -- scripts/common.sh@352 -- # local d=2 00:16:17.123 13:00:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.123 13:00:57 -- scripts/common.sh@354 -- # echo 2 00:16:17.123 13:00:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:17.123 13:00:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:17.123 13:00:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:17.123 13:00:57 -- scripts/common.sh@367 -- # return 0 00:16:17.123 13:00:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.123 13:00:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:17.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.123 --rc genhtml_branch_coverage=1 00:16:17.123 --rc genhtml_function_coverage=1 00:16:17.123 --rc genhtml_legend=1 00:16:17.123 --rc geninfo_all_blocks=1 00:16:17.123 --rc geninfo_unexecuted_blocks=1 00:16:17.123 00:16:17.123 ' 00:16:17.123 13:00:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:17.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.123 --rc genhtml_branch_coverage=1 00:16:17.123 --rc genhtml_function_coverage=1 00:16:17.123 --rc genhtml_legend=1 00:16:17.123 --rc geninfo_all_blocks=1 00:16:17.123 --rc geninfo_unexecuted_blocks=1 00:16:17.123 00:16:17.123 ' 00:16:17.123 13:00:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:17.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.123 --rc genhtml_branch_coverage=1 00:16:17.123 --rc genhtml_function_coverage=1 00:16:17.123 --rc genhtml_legend=1 00:16:17.123 --rc geninfo_all_blocks=1 00:16:17.123 --rc geninfo_unexecuted_blocks=1 00:16:17.123 00:16:17.123 ' 00:16:17.123 
13:00:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:17.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.123 --rc genhtml_branch_coverage=1 00:16:17.123 --rc genhtml_function_coverage=1 00:16:17.123 --rc genhtml_legend=1 00:16:17.123 --rc geninfo_all_blocks=1 00:16:17.123 --rc geninfo_unexecuted_blocks=1 00:16:17.123 00:16:17.123 ' 00:16:17.123 13:00:57 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.123 13:00:57 -- nvmf/common.sh@7 -- # uname -s 00:16:17.123 13:00:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.123 13:00:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.123 13:00:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.123 13:00:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.123 13:00:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.123 13:00:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.123 13:00:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.123 13:00:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.123 13:00:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.123 13:00:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.123 13:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:17.123 13:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:17.123 13:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.123 13:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.123 13:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.123 13:00:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.123 13:00:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.123 13:00:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.123 13:00:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.123 13:00:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.123 13:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.123 13:00:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.123 13:00:57 -- paths/export.sh@5 -- # export PATH 00:16:17.123 13:00:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.123 13:00:57 -- nvmf/common.sh@46 -- # : 0 00:16:17.123 13:00:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:17.123 13:00:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:17.123 13:00:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:17.123 13:00:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.123 13:00:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.123 13:00:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:17.123 13:00:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:17.123 13:00:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:17.123 13:00:57 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.123 13:00:57 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.123 13:00:57 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:17.123 13:00:57 -- target/fio.sh@16 -- # nvmftestinit 00:16:17.123 13:00:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:17.123 13:00:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.123 13:00:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:17.123 13:00:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:17.123 13:00:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:17.123 13:00:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.123 13:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.123 13:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.123 13:00:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:17.123 13:00:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:17.123 13:00:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:17.123 13:00:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:17.123 13:00:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:17.123 13:00:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:17.123 13:00:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.123 13:00:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.123 13:00:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:17.123 13:00:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:17.123 13:00:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.123 13:00:57 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.123 13:00:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.123 13:00:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.123 13:00:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.123 13:00:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.123 13:00:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.124 13:00:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.124 13:00:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:17.124 13:00:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:17.124 Cannot find device "nvmf_tgt_br" 00:16:17.382 13:00:57 -- nvmf/common.sh@154 -- # true 00:16:17.382 13:00:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.382 Cannot find device "nvmf_tgt_br2" 00:16:17.382 13:00:57 -- nvmf/common.sh@155 -- # true 00:16:17.382 13:00:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:17.382 13:00:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:17.382 Cannot find device "nvmf_tgt_br" 00:16:17.382 13:00:57 -- nvmf/common.sh@157 -- # true 00:16:17.382 13:00:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:17.382 Cannot find device "nvmf_tgt_br2" 00:16:17.382 13:00:57 -- nvmf/common.sh@158 -- # true 00:16:17.382 13:00:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:17.382 13:00:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:17.382 13:00:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.382 13:00:57 -- nvmf/common.sh@161 -- # true 00:16:17.382 13:00:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.382 13:00:57 -- nvmf/common.sh@162 -- # true 00:16:17.382 13:00:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.382 13:00:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.382 13:00:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.382 13:00:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.382 13:00:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.382 13:00:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.382 13:00:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.382 13:00:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:17.382 13:00:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:17.382 13:00:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:17.382 13:00:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:17.382 13:00:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:17.382 13:00:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:17.382 13:00:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.382 13:00:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:17.382 13:00:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.382 13:00:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:17.382 13:00:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:17.382 13:00:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.382 13:00:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.382 13:00:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.382 13:00:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.382 13:00:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.642 13:00:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:17.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:17.642 00:16:17.642 --- 10.0.0.2 ping statistics --- 00:16:17.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.642 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:17.642 13:00:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:17.642 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.642 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:17.642 00:16:17.642 --- 10.0.0.3 ping statistics --- 00:16:17.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.642 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:17.642 13:00:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:17.642 00:16:17.642 --- 10.0.0.1 ping statistics --- 00:16:17.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.642 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:17.642 13:00:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.642 13:00:58 -- nvmf/common.sh@421 -- # return 0 00:16:17.642 13:00:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.642 13:00:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.642 13:00:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.642 13:00:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.642 13:00:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.642 13:00:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.642 13:00:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.642 13:00:58 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:17.642 13:00:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.642 13:00:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.642 13:00:58 -- common/autotest_common.sh@10 -- # set +x 00:16:17.642 13:00:58 -- nvmf/common.sh@469 -- # nvmfpid=86630 00:16:17.642 13:00:58 -- nvmf/common.sh@470 -- # waitforlisten 86630 00:16:17.642 13:00:58 -- common/autotest_common.sh@829 -- # '[' -z 86630 ']' 00:16:17.642 13:00:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.642 13:00:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.642 13:00:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
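As in the earlier nmic run, the target is launched inside the namespace and the harness blocks until the RPC socket answers before sending any configuration. A rough sketch of that start-and-wait step; the polling loop is an illustration of the idea, not the actual waitforlisten helper:
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Illustrative wait: poll the RPC socket until the target responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done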
00:16:17.642 13:00:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.642 13:00:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.642 13:00:58 -- common/autotest_common.sh@10 -- # set +x 00:16:17.642 [2024-12-13 13:00:58.258648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:17.642 [2024-12-13 13:00:58.258768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.642 [2024-12-13 13:00:58.399049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.901 [2024-12-13 13:00:58.463378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.901 [2024-12-13 13:00:58.463553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.901 [2024-12-13 13:00:58.463566] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.901 [2024-12-13 13:00:58.463573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.901 [2024-12-13 13:00:58.463907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.901 [2024-12-13 13:00:58.463959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.902 [2024-12-13 13:00:58.464610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.902 [2024-12-13 13:00:58.464642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.491 13:00:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.491 13:00:59 -- common/autotest_common.sh@862 -- # return 0 00:16:18.491 13:00:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.491 13:00:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.491 13:00:59 -- common/autotest_common.sh@10 -- # set +x 00:16:18.491 13:00:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.491 13:00:59 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:18.763 [2024-12-13 13:00:59.494862] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.032 13:00:59 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.291 13:00:59 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:19.291 13:00:59 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.549 13:01:00 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:19.549 13:01:00 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.808 13:01:00 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:19.808 13:01:00 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.067 13:01:00 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:20.067 13:01:00 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:20.326 13:01:00 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.326 13:01:01 -- 
target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:20.585 13:01:01 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.844 13:01:01 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:20.844 13:01:01 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.103 13:01:01 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:21.103 13:01:01 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:21.103 13:01:01 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:21.670 13:01:02 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:21.670 13:01:02 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.670 13:01:02 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:21.670 13:01:02 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:21.929 13:01:02 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.187 [2024-12-13 13:01:02.786599] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.187 13:01:02 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:22.446 13:01:03 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:22.705 13:01:03 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.705 13:01:03 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:22.705 13:01:03 -- common/autotest_common.sh@1187 -- # local i=0 00:16:22.705 13:01:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.705 13:01:03 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:22.705 13:01:03 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:22.705 13:01:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:25.236 13:01:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:25.236 13:01:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:25.236 13:01:05 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.236 13:01:05 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:25.236 13:01:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.236 13:01:05 -- common/autotest_common.sh@1197 -- # return 0 00:16:25.236 13:01:05 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:25.236 [global] 00:16:25.236 thread=1 00:16:25.236 invalidate=1 00:16:25.236 rw=write 00:16:25.236 time_based=1 00:16:25.236 runtime=1 00:16:25.236 ioengine=libaio 00:16:25.236 direct=1 00:16:25.236 bs=4096 00:16:25.236 iodepth=1 00:16:25.236 norandommap=0 00:16:25.236 numjobs=1 00:16:25.236 00:16:25.236 verify_dump=1 00:16:25.236 verify_backlog=512 
00:16:25.236 verify_state_save=0 00:16:25.236 do_verify=1 00:16:25.236 verify=crc32c-intel 00:16:25.236 [job0] 00:16:25.236 filename=/dev/nvme0n1 00:16:25.236 [job1] 00:16:25.236 filename=/dev/nvme0n2 00:16:25.236 [job2] 00:16:25.236 filename=/dev/nvme0n3 00:16:25.236 [job3] 00:16:25.236 filename=/dev/nvme0n4 00:16:25.236 Could not set queue depth (nvme0n1) 00:16:25.236 Could not set queue depth (nvme0n2) 00:16:25.236 Could not set queue depth (nvme0n3) 00:16:25.236 Could not set queue depth (nvme0n4) 00:16:25.236 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.236 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.236 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.236 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.236 fio-3.35 00:16:25.236 Starting 4 threads 00:16:26.172 00:16:26.172 job0: (groupid=0, jobs=1): err= 0: pid=86923: Fri Dec 13 13:01:06 2024 00:16:26.172 read: IOPS=1722, BW=6889KiB/s (7054kB/s)(6896KiB/1001msec) 00:16:26.172 slat (nsec): min=11638, max=46387, avg=14124.63, stdev=3292.38 00:16:26.172 clat (usec): min=137, max=633, avg=261.76, stdev=25.60 00:16:26.172 lat (usec): min=153, max=648, avg=275.89, stdev=25.68 00:16:26.172 clat percentiles (usec): 00:16:26.172 | 1.00th=[ 178], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:16:26.172 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:16:26.172 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:16:26.172 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 400], 99.95th=[ 635], 00:16:26.172 | 99.99th=[ 635] 00:16:26.172 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:26.172 slat (nsec): min=18562, max=90210, avg=24734.95, stdev=6798.00 00:16:26.172 clat (usec): min=106, max=42124, avg=228.21, stdev=926.51 00:16:26.172 lat (usec): min=137, max=42145, avg=252.94, stdev=926.46 00:16:26.172 clat percentiles (usec): 00:16:26.172 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:16:26.172 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 210], 00:16:26.172 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 249], 00:16:26.172 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 293], 00:16:26.172 | 99.99th=[42206] 00:16:26.172 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:26.172 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:26.172 lat (usec) : 250=66.52%, 500=33.43%, 750=0.03% 00:16:26.172 lat (msec) : 50=0.03% 00:16:26.172 cpu : usr=1.50%, sys=5.50%, ctx=3772, majf=0, minf=7 00:16:26.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.172 issued rwts: total=1724,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.172 job1: (groupid=0, jobs=1): err= 0: pid=86924: Fri Dec 13 13:01:06 2024 00:16:26.172 read: IOPS=1749, BW=6997KiB/s (7165kB/s)(7004KiB/1001msec) 00:16:26.172 slat (nsec): min=8275, max=58402, avg=12821.78, stdev=4388.18 00:16:26.172 clat (usec): min=142, max=40305, avg=287.06, stdev=961.97 00:16:26.172 lat (usec): 
min=159, max=40322, avg=299.88, stdev=962.07 00:16:26.172 clat percentiles (usec): 00:16:26.172 | 1.00th=[ 206], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:16:26.172 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 265], 00:16:26.172 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:16:26.172 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 3621], 99.95th=[40109], 00:16:26.172 | 99.99th=[40109] 00:16:26.172 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:26.172 slat (nsec): min=16384, max=81633, avg=23889.44, stdev=6094.91 00:16:26.172 clat (usec): min=106, max=269, avg=204.55, stdev=16.79 00:16:26.172 lat (usec): min=128, max=305, avg=228.44, stdev=17.17 00:16:26.172 clat percentiles (usec): 00:16:26.172 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:16:26.172 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:16:26.172 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 227], 95.00th=[ 235], 00:16:26.172 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 262], 00:16:26.172 | 99.99th=[ 269] 00:16:26.172 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:26.172 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:26.172 lat (usec) : 250=68.68%, 500=31.22% 00:16:26.172 lat (msec) : 2=0.05%, 4=0.03%, 50=0.03% 00:16:26.172 cpu : usr=1.10%, sys=6.10%, ctx=3801, majf=0, minf=7 00:16:26.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.172 issued rwts: total=1751,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.172 job2: (groupid=0, jobs=1): err= 0: pid=86925: Fri Dec 13 13:01:06 2024 00:16:26.172 read: IOPS=2001, BW=8008KiB/s (8200kB/s)(8016KiB/1001msec) 00:16:26.172 slat (nsec): min=11521, max=57164, avg=15290.17, stdev=4624.97 00:16:26.172 clat (usec): min=165, max=3117, avg=260.22, stdev=67.07 00:16:26.172 lat (usec): min=177, max=3135, avg=275.51, stdev=67.35 00:16:26.172 clat percentiles (usec): 00:16:26.172 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:16:26.172 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:16:26.172 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:16:26.172 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 367], 99.95th=[ 404], 00:16:26.172 | 99.99th=[ 3130] 00:16:26.172 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:26.172 slat (nsec): min=18460, max=93103, avg=23937.94, stdev=7063.60 00:16:26.172 clat (usec): min=97, max=303, avg=191.29, stdev=32.27 00:16:26.172 lat (usec): min=116, max=397, avg=215.22, stdev=34.01 00:16:26.172 clat percentiles (usec): 00:16:26.172 | 1.00th=[ 108], 5.00th=[ 118], 10.00th=[ 131], 20.00th=[ 182], 00:16:26.172 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:16:26.172 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 231], 00:16:26.172 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 260], 99.95th=[ 269], 00:16:26.172 | 99.99th=[ 306] 00:16:26.172 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:26.172 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:26.172 lat (usec) : 100=0.07%, 250=68.76%, 500=31.15% 00:16:26.172 lat (msec) : 
4=0.02% 00:16:26.172 cpu : usr=1.00%, sys=6.40%, ctx=4052, majf=0, minf=17 00:16:26.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.173 issued rwts: total=2004,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.173 job3: (groupid=0, jobs=1): err= 0: pid=86926: Fri Dec 13 13:01:06 2024 00:16:26.173 read: IOPS=1748, BW=6993KiB/s (7161kB/s)(7000KiB/1001msec) 00:16:26.173 slat (nsec): min=8002, max=57313, avg=13863.34, stdev=3973.54 00:16:26.173 clat (usec): min=159, max=40329, avg=286.37, stdev=962.77 00:16:26.173 lat (usec): min=175, max=40338, avg=300.24, stdev=962.67 00:16:26.173 clat percentiles (usec): 00:16:26.173 | 1.00th=[ 210], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:16:26.173 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:16:26.173 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:16:26.173 | 99.00th=[ 318], 99.50th=[ 359], 99.90th=[ 3621], 99.95th=[40109], 00:16:26.173 | 99.99th=[40109] 00:16:26.173 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:26.173 slat (nsec): min=18811, max=70462, avg=23989.65, stdev=6155.52 00:16:26.173 clat (usec): min=157, max=263, avg=204.53, stdev=16.17 00:16:26.173 lat (usec): min=193, max=286, avg=228.52, stdev=16.25 00:16:26.173 clat percentiles (usec): 00:16:26.173 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 00:16:26.173 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:16:26.173 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 227], 95.00th=[ 233], 00:16:26.173 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 258], 99.95th=[ 262], 00:16:26.173 | 99.99th=[ 265] 00:16:26.173 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:26.173 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:26.173 lat (usec) : 250=69.83%, 500=30.04%, 750=0.03% 00:16:26.173 lat (msec) : 2=0.05%, 4=0.03%, 50=0.03% 00:16:26.173 cpu : usr=1.60%, sys=5.40%, ctx=3802, majf=0, minf=7 00:16:26.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.173 issued rwts: total=1750,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.173 00:16:26.173 Run status group 0 (all jobs): 00:16:26.173 READ: bw=28.2MiB/s (29.6MB/s), 6889KiB/s-8008KiB/s (7054kB/s-8200kB/s), io=28.2MiB (29.6MB), run=1001-1001msec 00:16:26.173 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:16:26.173 00:16:26.173 Disk stats (read/write): 00:16:26.173 nvme0n1: ios=1586/1719, merge=0/0, ticks=448/417, in_queue=865, util=88.28% 00:16:26.173 nvme0n2: ios=1576/1692, merge=0/0, ticks=438/356, in_queue=794, util=88.04% 00:16:26.173 nvme0n3: ios=1536/2000, merge=0/0, ticks=406/404, in_queue=810, util=89.25% 00:16:26.173 nvme0n4: ios=1536/1690, merge=0/0, ticks=448/353, in_queue=801, util=89.70% 00:16:26.173 13:01:06 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:26.173 [global] 
00:16:26.173 thread=1 00:16:26.173 invalidate=1 00:16:26.173 rw=randwrite 00:16:26.173 time_based=1 00:16:26.173 runtime=1 00:16:26.173 ioengine=libaio 00:16:26.173 direct=1 00:16:26.173 bs=4096 00:16:26.173 iodepth=1 00:16:26.173 norandommap=0 00:16:26.173 numjobs=1 00:16:26.173 00:16:26.173 verify_dump=1 00:16:26.173 verify_backlog=512 00:16:26.173 verify_state_save=0 00:16:26.173 do_verify=1 00:16:26.173 verify=crc32c-intel 00:16:26.173 [job0] 00:16:26.173 filename=/dev/nvme0n1 00:16:26.173 [job1] 00:16:26.173 filename=/dev/nvme0n2 00:16:26.173 [job2] 00:16:26.173 filename=/dev/nvme0n3 00:16:26.173 [job3] 00:16:26.173 filename=/dev/nvme0n4 00:16:26.173 Could not set queue depth (nvme0n1) 00:16:26.173 Could not set queue depth (nvme0n2) 00:16:26.173 Could not set queue depth (nvme0n3) 00:16:26.173 Could not set queue depth (nvme0n4) 00:16:26.432 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.432 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.432 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.432 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.432 fio-3.35 00:16:26.432 Starting 4 threads 00:16:27.809 00:16:27.809 job0: (groupid=0, jobs=1): err= 0: pid=86982: Fri Dec 13 13:01:08 2024 00:16:27.809 read: IOPS=1693, BW=6773KiB/s (6936kB/s)(6780KiB/1001msec) 00:16:27.809 slat (nsec): min=8143, max=64562, avg=14742.96, stdev=5364.54 00:16:27.809 clat (usec): min=190, max=7948, avg=277.45, stdev=200.99 00:16:27.809 lat (usec): min=202, max=7966, avg=292.20, stdev=201.47 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:16:27.809 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 265], 00:16:27.809 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 412], 00:16:27.809 | 99.00th=[ 461], 99.50th=[ 506], 99.90th=[ 1401], 99.95th=[ 7963], 00:16:27.809 | 99.99th=[ 7963] 00:16:27.809 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:27.809 slat (usec): min=10, max=110, avg=23.37, stdev=10.70 00:16:27.809 clat (usec): min=97, max=693, avg=219.90, stdev=48.45 00:16:27.809 lat (usec): min=113, max=714, avg=243.27, stdev=53.45 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 105], 5.00th=[ 122], 10.00th=[ 149], 20.00th=[ 186], 00:16:27.809 | 30.00th=[ 202], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 239], 00:16:27.809 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 281], 00:16:27.809 | 99.00th=[ 310], 99.50th=[ 314], 99.90th=[ 445], 99.95th=[ 545], 00:16:27.809 | 99.99th=[ 693] 00:16:27.809 bw ( KiB/s): min= 8192, max= 8192, per=24.97%, avg=8192.00, stdev= 0.00, samples=1 00:16:27.809 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:27.809 lat (usec) : 100=0.05%, 250=60.91%, 500=38.74%, 750=0.16%, 1000=0.03% 00:16:27.809 lat (msec) : 2=0.08%, 10=0.03% 00:16:27.809 cpu : usr=1.00%, sys=6.00%, ctx=3744, majf=0, minf=7 00:16:27.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.809 issued rwts: total=1695,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.809 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:16:27.809 job1: (groupid=0, jobs=1): err= 0: pid=86985: Fri Dec 13 13:01:08 2024 00:16:27.809 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:27.809 slat (nsec): min=9647, max=43415, avg=12050.13, stdev=3638.63 00:16:27.809 clat (usec): min=206, max=613, avg=328.68, stdev=100.46 00:16:27.809 lat (usec): min=226, max=625, avg=340.73, stdev=101.54 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 243], 00:16:27.809 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 293], 00:16:27.809 | 70.00th=[ 420], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 498], 00:16:27.809 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 611], 00:16:27.809 | 99.99th=[ 611] 00:16:27.809 write: IOPS=1756, BW=7025KiB/s (7194kB/s)(7032KiB/1001msec); 0 zone resets 00:16:27.809 slat (nsec): min=13326, max=79968, avg=21430.77, stdev=6452.23 00:16:27.809 clat (usec): min=122, max=433, avg=246.45, stdev=44.79 00:16:27.809 lat (usec): min=143, max=453, avg=267.88, stdev=45.39 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:16:27.809 | 30.00th=[ 212], 40.00th=[ 229], 50.00th=[ 253], 60.00th=[ 265], 00:16:27.809 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:16:27.809 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 392], 99.95th=[ 433], 00:16:27.809 | 99.99th=[ 433] 00:16:27.809 bw ( KiB/s): min= 6888, max= 6888, per=21.00%, avg=6888.00, stdev= 0.00, samples=1 00:16:27.809 iops : min= 1722, max= 1722, avg=1722.00, stdev= 0.00, samples=1 00:16:27.809 lat (usec) : 250=38.25%, 500=59.81%, 750=1.94% 00:16:27.809 cpu : usr=1.20%, sys=4.50%, ctx=3297, majf=0, minf=12 00:16:27.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.809 issued rwts: total=1536,1758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.809 job2: (groupid=0, jobs=1): err= 0: pid=86987: Fri Dec 13 13:01:08 2024 00:16:27.809 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:27.809 slat (nsec): min=9640, max=66722, avg=14800.63, stdev=4191.49 00:16:27.809 clat (usec): min=132, max=1935, avg=197.17, stdev=63.68 00:16:27.809 lat (usec): min=144, max=1948, avg=211.97, stdev=63.21 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:16:27.809 | 30.00th=[ 155], 40.00th=[ 163], 50.00th=[ 176], 60.00th=[ 204], 00:16:27.809 | 70.00th=[ 227], 80.00th=[ 245], 90.00th=[ 269], 95.00th=[ 297], 00:16:27.809 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 693], 99.95th=[ 701], 00:16:27.809 | 99.99th=[ 1942] 00:16:27.809 write: IOPS=2642, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:16:27.809 slat (usec): min=10, max=117, avg=22.78, stdev= 6.81 00:16:27.809 clat (usec): min=97, max=647, avg=146.85, stdev=49.79 00:16:27.809 lat (usec): min=120, max=659, avg=169.63, stdev=47.58 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 105], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 114], 00:16:27.809 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 135], 00:16:27.809 | 70.00th=[ 145], 80.00th=[ 176], 90.00th=[ 235], 95.00th=[ 251], 00:16:27.809 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 
474], 99.95th=[ 627], 00:16:27.809 | 99.99th=[ 652] 00:16:27.809 bw ( KiB/s): min=12288, max=12288, per=37.46%, avg=12288.00, stdev= 0.00, samples=1 00:16:27.809 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:27.809 lat (usec) : 100=0.04%, 250=88.47%, 500=11.39%, 750=0.08% 00:16:27.809 lat (msec) : 2=0.02% 00:16:27.809 cpu : usr=1.70%, sys=7.70%, ctx=5206, majf=0, minf=17 00:16:27.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.809 issued rwts: total=2560,2645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.809 job3: (groupid=0, jobs=1): err= 0: pid=86988: Fri Dec 13 13:01:08 2024 00:16:27.809 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:27.809 slat (nsec): min=10232, max=46330, avg=13658.00, stdev=4088.29 00:16:27.809 clat (usec): min=212, max=613, avg=326.99, stdev=101.37 00:16:27.809 lat (usec): min=225, max=625, avg=340.65, stdev=101.52 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:16:27.809 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 289], 00:16:27.809 | 70.00th=[ 420], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 502], 00:16:27.809 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 611], 00:16:27.809 | 99.99th=[ 611] 00:16:27.809 write: IOPS=1757, BW=7029KiB/s (7198kB/s)(7036KiB/1001msec); 0 zone resets 00:16:27.809 slat (nsec): min=14719, max=74051, avg=22840.36, stdev=7260.78 00:16:27.809 clat (usec): min=103, max=413, avg=244.96, stdev=43.51 00:16:27.809 lat (usec): min=127, max=441, avg=267.80, stdev=45.27 00:16:27.809 clat percentiles (usec): 00:16:27.809 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 198], 00:16:27.809 | 30.00th=[ 212], 40.00th=[ 229], 50.00th=[ 249], 60.00th=[ 265], 00:16:27.809 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 310], 00:16:27.809 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 375], 99.95th=[ 412], 00:16:27.809 | 99.99th=[ 412] 00:16:27.809 bw ( KiB/s): min= 6880, max= 6880, per=20.97%, avg=6880.00, stdev= 0.00, samples=1 00:16:27.809 iops : min= 1720, max= 1720, avg=1720.00, stdev= 0.00, samples=1 00:16:27.809 lat (usec) : 250=39.76%, 500=57.72%, 750=2.52% 00:16:27.809 cpu : usr=1.40%, sys=4.60%, ctx=3295, majf=0, minf=9 00:16:27.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.810 issued rwts: total=1536,1759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.810 00:16:27.810 Run status group 0 (all jobs): 00:16:27.810 READ: bw=28.6MiB/s (30.0MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:16:27.810 WRITE: bw=32.0MiB/s (33.6MB/s), 7025KiB/s-10.3MiB/s (7194kB/s-10.8MB/s), io=32.1MiB (33.6MB), run=1001-1001msec 00:16:27.810 00:16:27.810 Disk stats (read/write): 00:16:27.810 nvme0n1: ios=1586/1620, merge=0/0, ticks=447/379, in_queue=826, util=87.88% 00:16:27.810 nvme0n2: ios=1256/1536, merge=0/0, ticks=431/398, in_queue=829, util=88.97% 00:16:27.810 nvme0n3: ios=2111/2560, merge=0/0, ticks=408/376, in_queue=784, 
util=89.23% 00:16:27.810 nvme0n4: ios=1207/1536, merge=0/0, ticks=421/402, in_queue=823, util=89.79% 00:16:27.810 13:01:08 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:27.810 [global] 00:16:27.810 thread=1 00:16:27.810 invalidate=1 00:16:27.810 rw=write 00:16:27.810 time_based=1 00:16:27.810 runtime=1 00:16:27.810 ioengine=libaio 00:16:27.810 direct=1 00:16:27.810 bs=4096 00:16:27.810 iodepth=128 00:16:27.810 norandommap=0 00:16:27.810 numjobs=1 00:16:27.810 00:16:27.810 verify_dump=1 00:16:27.810 verify_backlog=512 00:16:27.810 verify_state_save=0 00:16:27.810 do_verify=1 00:16:27.810 verify=crc32c-intel 00:16:27.810 [job0] 00:16:27.810 filename=/dev/nvme0n1 00:16:27.810 [job1] 00:16:27.810 filename=/dev/nvme0n2 00:16:27.810 [job2] 00:16:27.810 filename=/dev/nvme0n3 00:16:27.810 [job3] 00:16:27.810 filename=/dev/nvme0n4 00:16:27.810 Could not set queue depth (nvme0n1) 00:16:27.810 Could not set queue depth (nvme0n2) 00:16:27.810 Could not set queue depth (nvme0n3) 00:16:27.810 Could not set queue depth (nvme0n4) 00:16:27.810 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.810 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.810 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.810 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.810 fio-3.35 00:16:27.810 Starting 4 threads 00:16:29.187 00:16:29.187 job0: (groupid=0, jobs=1): err= 0: pid=87043: Fri Dec 13 13:01:09 2024 00:16:29.187 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:16:29.187 slat (usec): min=7, max=3322, avg=82.24, stdev=380.71 00:16:29.187 clat (usec): min=7794, max=14262, avg=10827.91, stdev=1173.58 00:16:29.187 lat (usec): min=7803, max=14431, avg=10910.15, stdev=1152.25 00:16:29.187 clat percentiles (usec): 00:16:29.187 | 1.00th=[ 8225], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9765], 00:16:29.187 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:16:29.187 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:16:29.187 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13566], 99.95th=[13566], 00:16:29.187 | 99.99th=[14222] 00:16:29.187 write: IOPS=5984, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1002msec); 0 zone resets 00:16:29.187 slat (usec): min=11, max=3409, avg=82.82, stdev=343.12 00:16:29.187 clat (usec): min=267, max=14354, avg=10955.43, stdev=1345.19 00:16:29.187 lat (usec): min=2933, max=14381, avg=11038.26, stdev=1320.20 00:16:29.187 clat percentiles (usec): 00:16:29.187 | 1.00th=[ 7308], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10552], 00:16:29.187 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:16:29.187 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:16:29.187 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14353], 99.95th=[14353], 00:16:29.187 | 99.99th=[14353] 00:16:29.187 bw ( KiB/s): min=22720, max=24272, per=34.73%, avg=23496.00, stdev=1097.43, samples=2 00:16:29.187 iops : min= 5680, max= 6068, avg=5874.00, stdev=274.36, samples=2 00:16:29.187 lat (usec) : 500=0.01% 00:16:29.187 lat (msec) : 4=0.35%, 10=19.99%, 20=79.65% 00:16:29.187 cpu : usr=4.80%, sys=14.89%, ctx=801, majf=0, minf=15 00:16:29.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 
00:16:29.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.187 issued rwts: total=5632,5996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.187 job1: (groupid=0, jobs=1): err= 0: pid=87044: Fri Dec 13 13:01:09 2024 00:16:29.187 read: IOPS=2845, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1006msec) 00:16:29.187 slat (usec): min=3, max=10966, avg=186.07, stdev=844.66 00:16:29.187 clat (usec): min=329, max=34957, avg=23318.17, stdev=4163.83 00:16:29.187 lat (usec): min=6330, max=35001, avg=23504.25, stdev=4200.92 00:16:29.187 clat percentiles (usec): 00:16:29.187 | 1.00th=[ 7177], 5.00th=[17171], 10.00th=[18744], 20.00th=[20317], 00:16:29.187 | 30.00th=[21103], 40.00th=[22676], 50.00th=[23725], 60.00th=[24773], 00:16:29.187 | 70.00th=[25822], 80.00th=[26608], 90.00th=[28443], 95.00th=[29230], 00:16:29.187 | 99.00th=[31327], 99.50th=[31851], 99.90th=[34341], 99.95th=[34866], 00:16:29.187 | 99.99th=[34866] 00:16:29.187 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:16:29.187 slat (usec): min=4, max=6498, avg=146.37, stdev=640.17 00:16:29.187 clat (usec): min=13054, max=26964, avg=19533.18, stdev=2848.99 00:16:29.187 lat (usec): min=13079, max=27557, avg=19679.55, stdev=2869.55 00:16:29.187 clat percentiles (usec): 00:16:29.187 | 1.00th=[13435], 5.00th=[14353], 10.00th=[15664], 20.00th=[16909], 00:16:29.187 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19530], 60.00th=[20055], 00:16:29.187 | 70.00th=[20579], 80.00th=[21890], 90.00th=[23200], 95.00th=[24511], 00:16:29.187 | 99.00th=[26346], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:16:29.187 | 99.99th=[26870] 00:16:29.187 bw ( KiB/s): min=12288, max=12312, per=18.18%, avg=12300.00, stdev=16.97, samples=2 00:16:29.187 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:16:29.187 lat (usec) : 500=0.02% 00:16:29.187 lat (msec) : 10=0.71%, 20=37.84%, 50=61.43% 00:16:29.187 cpu : usr=2.79%, sys=8.66%, ctx=674, majf=0, minf=8 00:16:29.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:29.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.187 issued rwts: total=2863,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.188 job2: (groupid=0, jobs=1): err= 0: pid=87045: Fri Dec 13 13:01:09 2024 00:16:29.188 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:16:29.188 slat (usec): min=7, max=5609, avg=92.58, stdev=520.95 00:16:29.188 clat (usec): min=8094, max=19198, avg=12295.23, stdev=1109.46 00:16:29.188 lat (usec): min=8116, max=19209, avg=12387.81, stdev=1177.26 00:16:29.188 clat percentiles (usec): 00:16:29.188 | 1.00th=[ 8586], 5.00th=[10814], 10.00th=[11469], 20.00th=[11731], 00:16:29.188 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:16:29.188 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13435], 95.00th=[14091], 00:16:29.188 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:16:29.188 | 99.99th=[19268] 00:16:29.188 write: IOPS=5261, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1005msec); 0 zone resets 00:16:29.188 slat (usec): min=10, max=5296, avg=92.89, stdev=531.71 00:16:29.188 clat (usec): min=355, max=18079, avg=12127.68, stdev=1625.81 00:16:29.188 
lat (usec): min=4983, max=18098, avg=12220.56, stdev=1592.26 00:16:29.188 clat percentiles (usec): 00:16:29.188 | 1.00th=[ 5997], 5.00th=[ 8160], 10.00th=[10159], 20.00th=[11731], 00:16:29.188 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:16:29.188 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:16:29.188 | 99.00th=[15008], 99.50th=[16581], 99.90th=[17957], 99.95th=[17957], 00:16:29.188 | 99.99th=[17957] 00:16:29.188 bw ( KiB/s): min=20521, max=20800, per=30.54%, avg=20660.50, stdev=197.28, samples=2 00:16:29.188 iops : min= 5130, max= 5200, avg=5165.00, stdev=49.50, samples=2 00:16:29.188 lat (usec) : 500=0.01% 00:16:29.188 lat (msec) : 10=6.27%, 20=93.72% 00:16:29.188 cpu : usr=4.58%, sys=13.84%, ctx=370, majf=0, minf=7 00:16:29.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:29.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.188 issued rwts: total=5120,5288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.188 job3: (groupid=0, jobs=1): err= 0: pid=87046: Fri Dec 13 13:01:09 2024 00:16:29.188 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:16:29.188 slat (usec): min=2, max=9732, avg=210.63, stdev=963.41 00:16:29.188 clat (usec): min=15836, max=40728, avg=26646.52, stdev=4730.91 00:16:29.188 lat (usec): min=15844, max=40765, avg=26857.15, stdev=4757.40 00:16:29.188 clat percentiles (usec): 00:16:29.188 | 1.00th=[17171], 5.00th=[19006], 10.00th=[20841], 20.00th=[22938], 00:16:29.188 | 30.00th=[24511], 40.00th=[25297], 50.00th=[26608], 60.00th=[27657], 00:16:29.188 | 70.00th=[28181], 80.00th=[29754], 90.00th=[32375], 95.00th=[36439], 00:16:29.188 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:16:29.188 | 99.99th=[40633] 00:16:29.188 write: IOPS=2658, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1007msec); 0 zone resets 00:16:29.188 slat (usec): min=4, max=10425, avg=164.56, stdev=733.58 00:16:29.188 clat (usec): min=5954, max=31595, avg=21898.74, stdev=3644.94 00:16:29.188 lat (usec): min=5976, max=31615, avg=22063.31, stdev=3611.06 00:16:29.188 clat percentiles (usec): 00:16:29.188 | 1.00th=[12518], 5.00th=[16450], 10.00th=[17433], 20.00th=[19006], 00:16:29.188 | 30.00th=[19792], 40.00th=[20579], 50.00th=[21627], 60.00th=[22938], 00:16:29.188 | 70.00th=[23987], 80.00th=[25560], 90.00th=[26608], 95.00th=[27132], 00:16:29.188 | 99.00th=[27919], 99.50th=[28967], 99.90th=[31589], 99.95th=[31589], 00:16:29.188 | 99.99th=[31589] 00:16:29.188 bw ( KiB/s): min= 9352, max=11168, per=15.16%, avg=10260.00, stdev=1284.11, samples=2 00:16:29.188 iops : min= 2338, max= 2792, avg=2565.00, stdev=321.03, samples=2 00:16:29.188 lat (msec) : 10=0.50%, 20=19.36%, 50=80.14% 00:16:29.188 cpu : usr=2.19%, sys=7.16%, ctx=712, majf=0, minf=15 00:16:29.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:29.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.188 issued rwts: total=2560,2677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.188 00:16:29.188 Run status group 0 (all jobs): 00:16:29.188 READ: bw=62.7MiB/s (65.8MB/s), 9.93MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=63.2MiB (66.3MB), 
run=1002-1007msec 00:16:29.188 WRITE: bw=66.1MiB/s (69.3MB/s), 10.4MiB/s-23.4MiB/s (10.9MB/s-24.5MB/s), io=66.5MiB (69.8MB), run=1002-1007msec 00:16:29.188 00:16:29.188 Disk stats (read/write): 00:16:29.188 nvme0n1: ios=4896/5120, merge=0/0, ticks=16068/16595, in_queue=32663, util=88.18% 00:16:29.188 nvme0n2: ios=2555/2560, merge=0/0, ticks=19158/14372, in_queue=33530, util=89.90% 00:16:29.188 nvme0n3: ios=4307/4608, merge=0/0, ticks=24348/23915, in_queue=48263, util=89.29% 00:16:29.188 nvme0n4: ios=2048/2474, merge=0/0, ticks=16285/14362, in_queue=30647, util=89.34% 00:16:29.188 13:01:09 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:29.188 [global] 00:16:29.188 thread=1 00:16:29.188 invalidate=1 00:16:29.188 rw=randwrite 00:16:29.188 time_based=1 00:16:29.188 runtime=1 00:16:29.188 ioengine=libaio 00:16:29.188 direct=1 00:16:29.188 bs=4096 00:16:29.188 iodepth=128 00:16:29.188 norandommap=0 00:16:29.188 numjobs=1 00:16:29.188 00:16:29.188 verify_dump=1 00:16:29.188 verify_backlog=512 00:16:29.188 verify_state_save=0 00:16:29.188 do_verify=1 00:16:29.188 verify=crc32c-intel 00:16:29.188 [job0] 00:16:29.188 filename=/dev/nvme0n1 00:16:29.188 [job1] 00:16:29.188 filename=/dev/nvme0n2 00:16:29.188 [job2] 00:16:29.188 filename=/dev/nvme0n3 00:16:29.188 [job3] 00:16:29.188 filename=/dev/nvme0n4 00:16:29.188 Could not set queue depth (nvme0n1) 00:16:29.188 Could not set queue depth (nvme0n2) 00:16:29.188 Could not set queue depth (nvme0n3) 00:16:29.188 Could not set queue depth (nvme0n4) 00:16:29.188 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.188 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.188 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.188 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.188 fio-3.35 00:16:29.188 Starting 4 threads 00:16:30.566 00:16:30.566 job0: (groupid=0, jobs=1): err= 0: pid=87099: Fri Dec 13 13:01:10 2024 00:16:30.566 read: IOPS=2587, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1006msec) 00:16:30.566 slat (usec): min=2, max=17283, avg=196.80, stdev=997.89 00:16:30.566 clat (usec): min=3626, max=55266, avg=23744.39, stdev=9300.67 00:16:30.566 lat (usec): min=7019, max=55303, avg=23941.19, stdev=9379.23 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[10421], 20.00th=[13042], 00:16:30.566 | 30.00th=[19792], 40.00th=[21365], 50.00th=[23725], 60.00th=[25822], 00:16:30.566 | 70.00th=[27919], 80.00th=[31589], 90.00th=[35390], 95.00th=[39060], 00:16:30.566 | 99.00th=[48497], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:16:30.566 | 99.99th=[55313] 00:16:30.566 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:16:30.566 slat (usec): min=4, max=12698, avg=152.97, stdev=694.43 00:16:30.566 clat (usec): min=6419, max=51055, avg=21315.72, stdev=8687.33 00:16:30.566 lat (usec): min=6437, max=51068, avg=21468.69, stdev=8727.48 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[ 7242], 5.00th=[10421], 10.00th=[11076], 20.00th=[13304], 00:16:30.566 | 30.00th=[17695], 40.00th=[18744], 50.00th=[20317], 60.00th=[21627], 00:16:30.566 | 70.00th=[22938], 80.00th=[25822], 90.00th=[31327], 95.00th=[41681], 00:16:30.566 | 99.00th=[47973], 
99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:16:30.566 | 99.99th=[51119] 00:16:30.566 bw ( KiB/s): min=11080, max=12816, per=26.91%, avg=11948.00, stdev=1227.54, samples=2 00:16:30.566 iops : min= 2770, max= 3204, avg=2987.00, stdev=306.88, samples=2 00:16:30.566 lat (msec) : 4=0.02%, 10=2.54%, 20=37.60%, 50=59.19%, 100=0.65% 00:16:30.566 cpu : usr=1.89%, sys=7.36%, ctx=753, majf=0, minf=9 00:16:30.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:30.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.566 issued rwts: total=2603,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.566 job1: (groupid=0, jobs=1): err= 0: pid=87100: Fri Dec 13 13:01:10 2024 00:16:30.566 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:16:30.566 slat (usec): min=2, max=11512, avg=195.08, stdev=957.07 00:16:30.566 clat (usec): min=10286, max=56075, avg=25861.11, stdev=8084.19 00:16:30.566 lat (usec): min=10300, max=56110, avg=26056.19, stdev=8162.64 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[10683], 5.00th=[14615], 10.00th=[15664], 20.00th=[19792], 00:16:30.566 | 30.00th=[21890], 40.00th=[23462], 50.00th=[24773], 60.00th=[27132], 00:16:30.566 | 70.00th=[28443], 80.00th=[30802], 90.00th=[36439], 95.00th=[42206], 00:16:30.566 | 99.00th=[50070], 99.50th=[50070], 99.90th=[52167], 99.95th=[52691], 00:16:30.566 | 99.99th=[55837] 00:16:30.566 write: IOPS=2623, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1006msec); 0 zone resets 00:16:30.566 slat (usec): min=3, max=20474, avg=182.96, stdev=957.84 00:16:30.566 clat (usec): min=3374, max=57967, avg=23263.92, stdev=9402.99 00:16:30.566 lat (usec): min=3395, max=57987, avg=23446.88, stdev=9466.95 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[ 6259], 5.00th=[ 9241], 10.00th=[12649], 20.00th=[15533], 00:16:30.566 | 30.00th=[19530], 40.00th=[21890], 50.00th=[22676], 60.00th=[23987], 00:16:30.566 | 70.00th=[25297], 80.00th=[27395], 90.00th=[34341], 95.00th=[42730], 00:16:30.566 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:16:30.566 | 99.99th=[57934] 00:16:30.566 bw ( KiB/s): min= 8192, max=12288, per=23.06%, avg=10240.00, stdev=2896.31, samples=2 00:16:30.566 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:16:30.566 lat (msec) : 4=0.12%, 10=3.54%, 20=22.35%, 50=71.80%, 100=2.19% 00:16:30.566 cpu : usr=1.79%, sys=6.77%, ctx=668, majf=0, minf=9 00:16:30.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:30.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.566 issued rwts: total=2560,2639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.566 job2: (groupid=0, jobs=1): err= 0: pid=87101: Fri Dec 13 13:01:10 2024 00:16:30.566 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:16:30.566 slat (usec): min=3, max=10842, avg=177.90, stdev=935.30 00:16:30.566 clat (usec): min=10311, max=41549, avg=23268.75, stdev=5688.71 00:16:30.566 lat (usec): min=10319, max=41584, avg=23446.65, stdev=5778.52 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[10290], 5.00th=[11469], 10.00th=[14222], 20.00th=[20841], 00:16:30.566 | 30.00th=[21627], 
40.00th=[22676], 50.00th=[23725], 60.00th=[24773], 00:16:30.566 | 70.00th=[25822], 80.00th=[27132], 90.00th=[29754], 95.00th=[32637], 00:16:30.566 | 99.00th=[37487], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:16:30.566 | 99.99th=[41681] 00:16:30.566 write: IOPS=2880, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1005msec); 0 zone resets 00:16:30.566 slat (usec): min=5, max=11856, avg=182.27, stdev=878.62 00:16:30.566 clat (usec): min=3464, max=50289, avg=23382.40, stdev=8640.97 00:16:30.566 lat (usec): min=4032, max=50304, avg=23564.67, stdev=8691.04 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[13829], 20.00th=[15139], 00:16:30.566 | 30.00th=[19268], 40.00th=[21103], 50.00th=[22676], 60.00th=[24511], 00:16:30.566 | 70.00th=[26346], 80.00th=[29492], 90.00th=[35914], 95.00th=[40633], 00:16:30.566 | 99.00th=[46924], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:16:30.566 | 99.99th=[50070] 00:16:30.566 bw ( KiB/s): min= 9856, max=12288, per=24.94%, avg=11072.00, stdev=1719.68, samples=2 00:16:30.566 iops : min= 2464, max= 3072, avg=2768.00, stdev=429.92, samples=2 00:16:30.566 lat (msec) : 4=0.02%, 10=2.64%, 20=24.99%, 50=72.34%, 100=0.02% 00:16:30.566 cpu : usr=2.09%, sys=6.57%, ctx=714, majf=0, minf=16 00:16:30.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:30.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.566 issued rwts: total=2560,2895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.566 job3: (groupid=0, jobs=1): err= 0: pid=87102: Fri Dec 13 13:01:10 2024 00:16:30.566 read: IOPS=2163, BW=8653KiB/s (8860kB/s)(8696KiB/1005msec) 00:16:30.566 slat (usec): min=8, max=10016, avg=161.46, stdev=842.87 00:16:30.566 clat (usec): min=1828, max=49957, avg=18096.76, stdev=7056.76 00:16:30.566 lat (usec): min=7368, max=49979, avg=18258.22, stdev=7131.60 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[ 8225], 5.00th=[10028], 10.00th=[11076], 20.00th=[12911], 00:16:30.566 | 30.00th=[14091], 40.00th=[15008], 50.00th=[15664], 60.00th=[17957], 00:16:30.566 | 70.00th=[19006], 80.00th=[23462], 90.00th=[26084], 95.00th=[32900], 00:16:30.566 | 99.00th=[42730], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:16:30.566 | 99.99th=[50070] 00:16:30.566 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:16:30.566 slat (usec): min=10, max=25922, avg=245.92, stdev=1095.43 00:16:30.566 clat (usec): min=10098, max=68977, avg=33747.60, stdev=11057.50 00:16:30.566 lat (usec): min=10121, max=69032, avg=33993.52, stdev=11126.71 00:16:30.566 clat percentiles (usec): 00:16:30.566 | 1.00th=[13042], 5.00th=[14484], 10.00th=[17695], 20.00th=[21890], 00:16:30.566 | 30.00th=[28443], 40.00th=[31327], 50.00th=[34866], 60.00th=[37487], 00:16:30.566 | 70.00th=[40109], 80.00th=[43779], 90.00th=[47449], 95.00th=[51643], 00:16:30.566 | 99.00th=[53740], 99.50th=[57934], 99.90th=[59507], 99.95th=[60031], 00:16:30.566 | 99.99th=[68682] 00:16:30.566 bw ( KiB/s): min=10232, max=10232, per=23.05%, avg=10232.00, stdev= 0.00, samples=2 00:16:30.566 iops : min= 2558, max= 2558, avg=2558.00, stdev= 0.00, samples=2 00:16:30.566 lat (msec) : 2=0.02%, 10=1.10%, 20=39.84%, 50=55.20%, 100=3.84% 00:16:30.566 cpu : usr=2.59%, sys=7.87%, ctx=337, majf=0, minf=11 00:16:30.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 
32=0.7%, >=64=98.7% 00:16:30.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.566 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.566 00:16:30.566 Run status group 0 (all jobs): 00:16:30.566 READ: bw=38.4MiB/s (40.3MB/s), 8653KiB/s-10.1MiB/s (8860kB/s-10.6MB/s), io=38.7MiB (40.5MB), run=1005-1006msec 00:16:30.566 WRITE: bw=43.4MiB/s (45.5MB/s), 9.95MiB/s-11.9MiB/s (10.4MB/s-12.5MB/s), io=43.6MiB (45.7MB), run=1005-1006msec 00:16:30.566 00:16:30.566 Disk stats (read/write): 00:16:30.566 nvme0n1: ios=2385/2560, merge=0/0, ticks=19157/17414, in_queue=36571, util=87.88% 00:16:30.566 nvme0n2: ios=2097/2434, merge=0/0, ticks=21189/27030, in_queue=48219, util=88.98% 00:16:30.566 nvme0n3: ios=2126/2560, merge=0/0, ticks=18594/22191, in_queue=40785, util=87.74% 00:16:30.566 nvme0n4: ios=2048/2223, merge=0/0, ticks=17186/34335, in_queue=51521, util=89.84% 00:16:30.566 13:01:10 -- target/fio.sh@55 -- # sync 00:16:30.566 13:01:11 -- target/fio.sh@59 -- # fio_pid=87116 00:16:30.566 13:01:11 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:30.566 13:01:11 -- target/fio.sh@61 -- # sleep 3 00:16:30.566 [global] 00:16:30.566 thread=1 00:16:30.566 invalidate=1 00:16:30.566 rw=read 00:16:30.566 time_based=1 00:16:30.566 runtime=10 00:16:30.566 ioengine=libaio 00:16:30.566 direct=1 00:16:30.566 bs=4096 00:16:30.566 iodepth=1 00:16:30.566 norandommap=1 00:16:30.566 numjobs=1 00:16:30.566 00:16:30.566 [job0] 00:16:30.566 filename=/dev/nvme0n1 00:16:30.566 [job1] 00:16:30.566 filename=/dev/nvme0n2 00:16:30.566 [job2] 00:16:30.566 filename=/dev/nvme0n3 00:16:30.566 [job3] 00:16:30.566 filename=/dev/nvme0n4 00:16:30.566 Could not set queue depth (nvme0n1) 00:16:30.566 Could not set queue depth (nvme0n2) 00:16:30.566 Could not set queue depth (nvme0n3) 00:16:30.566 Could not set queue depth (nvme0n4) 00:16:30.566 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.566 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.567 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.567 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.567 fio-3.35 00:16:30.567 Starting 4 threads 00:16:33.852 13:01:14 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:33.852 fio: pid=87170, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:33.852 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43855872, buflen=4096 00:16:33.852 13:01:14 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:33.852 fio: pid=87169, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:33.852 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46927872, buflen=4096 00:16:33.852 13:01:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:33.852 13:01:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:34.111 fio: pid=87167, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:16:34.111 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51224576, buflen=4096 00:16:34.111 13:01:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.111 13:01:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:34.370 fio: pid=87168, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:34.370 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57929728, buflen=4096 00:16:34.370 00:16:34.370 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87167: Fri Dec 13 13:01:15 2024 00:16:34.370 read: IOPS=3650, BW=14.3MiB/s (15.0MB/s)(48.9MiB/3426msec) 00:16:34.370 slat (usec): min=9, max=9271, avg=17.54, stdev=154.09 00:16:34.370 clat (usec): min=115, max=3949, avg=254.85, stdev=54.19 00:16:34.370 lat (usec): min=126, max=9454, avg=272.39, stdev=162.74 00:16:34.370 clat percentiles (usec): 00:16:34.370 | 1.00th=[ 133], 5.00th=[ 206], 10.00th=[ 233], 20.00th=[ 241], 00:16:34.370 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:16:34.370 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:16:34.370 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 611], 99.95th=[ 766], 00:16:34.370 | 99.99th=[ 2442] 00:16:34.370 bw ( KiB/s): min=14552, max=14792, per=27.55%, avg=14601.33, stdev=94.71, samples=6 00:16:34.370 iops : min= 3638, max= 3698, avg=3650.33, stdev=23.68, samples=6 00:16:34.370 lat (usec) : 250=41.62%, 500=58.21%, 750=0.10%, 1000=0.04% 00:16:34.370 lat (msec) : 2=0.02%, 4=0.02% 00:16:34.370 cpu : usr=1.05%, sys=4.41%, ctx=12529, majf=0, minf=1 00:16:34.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.370 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.370 issued rwts: total=12507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.370 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87168: Fri Dec 13 13:01:15 2024 00:16:34.370 read: IOPS=3839, BW=15.0MiB/s (15.7MB/s)(55.2MiB/3684msec) 00:16:34.370 slat (usec): min=9, max=15846, avg=18.58, stdev=227.51 00:16:34.370 clat (usec): min=61, max=3019, avg=240.63, stdev=58.69 00:16:34.370 lat (usec): min=125, max=16038, avg=259.22, stdev=234.03 00:16:34.370 clat percentiles (usec): 00:16:34.370 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 143], 20.00th=[ 235], 00:16:34.370 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:16:34.370 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:16:34.370 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 379], 99.95th=[ 529], 00:16:34.370 | 99.99th=[ 2180] 00:16:34.370 bw ( KiB/s): min=14560, max=17339, per=28.49%, avg=15099.86, stdev=1018.67, samples=7 00:16:34.370 iops : min= 3640, max= 4334, avg=3774.86, stdev=254.39, samples=7 00:16:34.370 lat (usec) : 100=0.02%, 250=48.45%, 500=51.46%, 750=0.01%, 1000=0.01% 00:16:34.370 lat (msec) : 2=0.01%, 4=0.01% 00:16:34.370 cpu : usr=1.14%, sys=4.29%, ctx=14172, majf=0, minf=2 00:16:34.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:34.370 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.370 issued rwts: total=14144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.370 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87169: Fri Dec 13 13:01:15 2024 00:16:34.370 read: IOPS=3589, BW=14.0MiB/s (14.7MB/s)(44.8MiB/3192msec) 00:16:34.370 slat (usec): min=11, max=15722, avg=15.72, stdev=162.31 00:16:34.370 clat (usec): min=134, max=2596, avg=261.63, stdev=42.60 00:16:34.370 lat (usec): min=146, max=16006, avg=277.35, stdev=168.07 00:16:34.370 clat percentiles (usec): 00:16:34.370 | 1.00th=[ 169], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:16:34.370 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:16:34.370 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:16:34.370 | 99.00th=[ 334], 99.50th=[ 367], 99.90th=[ 668], 99.95th=[ 1029], 00:16:34.370 | 99.99th=[ 1745] 00:16:34.370 bw ( KiB/s): min=14224, max=14568, per=27.34%, avg=14489.33, stdev=133.04, samples=6 00:16:34.370 iops : min= 3556, max= 3642, avg=3622.33, stdev=33.26, samples=6 00:16:34.370 lat (usec) : 250=31.00%, 500=68.70%, 750=0.22%, 1000=0.02% 00:16:34.370 lat (msec) : 2=0.04%, 4=0.01% 00:16:34.370 cpu : usr=0.78%, sys=4.11%, ctx=11460, majf=0, minf=2 00:16:34.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.370 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.370 issued rwts: total=11458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.370 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87170: Fri Dec 13 13:01:15 2024 00:16:34.370 read: IOPS=3640, BW=14.2MiB/s (14.9MB/s)(41.8MiB/2941msec) 00:16:34.370 slat (usec): min=11, max=100, avg=13.81, stdev= 3.38 00:16:34.370 clat (usec): min=140, max=1673, avg=259.52, stdev=25.88 00:16:34.370 lat (usec): min=152, max=1687, avg=273.32, stdev=26.08 00:16:34.370 clat percentiles (usec): 00:16:34.370 | 1.00th=[ 237], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:16:34.370 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:16:34.370 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:16:34.370 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 441], 99.95th=[ 758], 00:16:34.370 | 99.99th=[ 1139] 00:16:34.370 bw ( KiB/s): min=14480, max=14640, per=27.48%, avg=14566.40, stdev=68.21, samples=5 00:16:34.370 iops : min= 3620, max= 3660, avg=3641.60, stdev=17.05, samples=5 00:16:34.370 lat (usec) : 250=33.62%, 500=66.28%, 750=0.04%, 1000=0.04% 00:16:34.370 lat (msec) : 2=0.02% 00:16:34.370 cpu : usr=0.88%, sys=4.08%, ctx=10708, majf=0, minf=2 00:16:34.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.370 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.370 issued rwts: total=10708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.370 00:16:34.370 Run status group 0 (all jobs): 00:16:34.370 READ: bw=51.8MiB/s (54.3MB/s), 14.0MiB/s-15.0MiB/s (14.7MB/s-15.7MB/s), io=191MiB (200MB), run=2941-3684msec 
00:16:34.370 00:16:34.370 Disk stats (read/write): 00:16:34.370 nvme0n1: ios=12267/0, merge=0/0, ticks=3168/0, in_queue=3168, util=95.37% 00:16:34.370 nvme0n2: ios=13711/0, merge=0/0, ticks=3388/0, in_queue=3388, util=95.08% 00:16:34.370 nvme0n3: ios=11236/0, merge=0/0, ticks=2987/0, in_queue=2987, util=96.12% 00:16:34.370 nvme0n4: ios=10443/0, merge=0/0, ticks=2760/0, in_queue=2760, util=96.66% 00:16:34.370 13:01:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.370 13:01:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:34.629 13:01:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.629 13:01:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:34.887 13:01:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.887 13:01:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:35.146 13:01:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:35.146 13:01:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:35.405 13:01:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:35.405 13:01:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:35.664 13:01:16 -- target/fio.sh@69 -- # fio_status=0 00:16:35.664 13:01:16 -- target/fio.sh@70 -- # wait 87116 00:16:35.664 13:01:16 -- target/fio.sh@70 -- # fio_status=4 00:16:35.664 13:01:16 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.664 13:01:16 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:35.664 13:01:16 -- common/autotest_common.sh@1208 -- # local i=0 00:16:35.664 13:01:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:35.664 13:01:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.664 13:01:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:35.664 13:01:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.664 nvmf hotplug test: fio failed as expected 00:16:35.664 13:01:16 -- common/autotest_common.sh@1220 -- # return 0 00:16:35.664 13:01:16 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:35.664 13:01:16 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:35.664 13:01:16 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.923 13:01:16 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:35.923 13:01:16 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:35.923 13:01:16 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:35.923 13:01:16 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:35.923 13:01:16 -- target/fio.sh@91 -- # nvmftestfini 00:16:35.923 13:01:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:35.923 13:01:16 -- nvmf/common.sh@116 -- # sync 00:16:36.182 13:01:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:36.182 13:01:16 -- nvmf/common.sh@119 -- # set +e 00:16:36.182 13:01:16 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:16:36.182 13:01:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:36.182 rmmod nvme_tcp 00:16:36.182 rmmod nvme_fabrics 00:16:36.182 rmmod nvme_keyring 00:16:36.182 13:01:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:36.182 13:01:16 -- nvmf/common.sh@123 -- # set -e 00:16:36.182 13:01:16 -- nvmf/common.sh@124 -- # return 0 00:16:36.182 13:01:16 -- nvmf/common.sh@477 -- # '[' -n 86630 ']' 00:16:36.182 13:01:16 -- nvmf/common.sh@478 -- # killprocess 86630 00:16:36.182 13:01:16 -- common/autotest_common.sh@936 -- # '[' -z 86630 ']' 00:16:36.182 13:01:16 -- common/autotest_common.sh@940 -- # kill -0 86630 00:16:36.182 13:01:16 -- common/autotest_common.sh@941 -- # uname 00:16:36.182 13:01:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.182 13:01:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86630 00:16:36.182 killing process with pid 86630 00:16:36.182 13:01:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:36.182 13:01:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:36.182 13:01:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86630' 00:16:36.182 13:01:16 -- common/autotest_common.sh@955 -- # kill 86630 00:16:36.182 13:01:16 -- common/autotest_common.sh@960 -- # wait 86630 00:16:36.441 13:01:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:36.441 13:01:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:36.441 13:01:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:36.441 13:01:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.441 13:01:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:36.441 13:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.441 13:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.441 13:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.441 13:01:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:36.441 ************************************ 00:16:36.441 END TEST nvmf_fio_target 00:16:36.441 ************************************ 00:16:36.441 00:16:36.441 real 0m19.328s 00:16:36.441 user 1m14.323s 00:16:36.441 sys 0m7.954s 00:16:36.441 13:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:36.441 13:01:17 -- common/autotest_common.sh@10 -- # set +x 00:16:36.441 13:01:17 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:36.441 13:01:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:36.441 13:01:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.441 13:01:17 -- common/autotest_common.sh@10 -- # set +x 00:16:36.441 ************************************ 00:16:36.441 START TEST nvmf_bdevio 00:16:36.441 ************************************ 00:16:36.441 13:01:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:36.441 * Looking for test storage... 
00:16:36.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:36.441 13:01:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:36.441 13:01:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:36.441 13:01:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:36.700 13:01:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:36.700 13:01:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:36.700 13:01:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:36.700 13:01:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:36.700 13:01:17 -- scripts/common.sh@335 -- # IFS=.-: 00:16:36.700 13:01:17 -- scripts/common.sh@335 -- # read -ra ver1 00:16:36.700 13:01:17 -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.700 13:01:17 -- scripts/common.sh@336 -- # read -ra ver2 00:16:36.700 13:01:17 -- scripts/common.sh@337 -- # local 'op=<' 00:16:36.700 13:01:17 -- scripts/common.sh@339 -- # ver1_l=2 00:16:36.700 13:01:17 -- scripts/common.sh@340 -- # ver2_l=1 00:16:36.700 13:01:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:36.700 13:01:17 -- scripts/common.sh@343 -- # case "$op" in 00:16:36.701 13:01:17 -- scripts/common.sh@344 -- # : 1 00:16:36.701 13:01:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:36.701 13:01:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:36.701 13:01:17 -- scripts/common.sh@364 -- # decimal 1 00:16:36.701 13:01:17 -- scripts/common.sh@352 -- # local d=1 00:16:36.701 13:01:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.701 13:01:17 -- scripts/common.sh@354 -- # echo 1 00:16:36.701 13:01:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:36.701 13:01:17 -- scripts/common.sh@365 -- # decimal 2 00:16:36.701 13:01:17 -- scripts/common.sh@352 -- # local d=2 00:16:36.701 13:01:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.701 13:01:17 -- scripts/common.sh@354 -- # echo 2 00:16:36.701 13:01:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:36.701 13:01:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:36.701 13:01:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:36.701 13:01:17 -- scripts/common.sh@367 -- # return 0 00:16:36.701 13:01:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.701 13:01:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:36.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.701 --rc genhtml_branch_coverage=1 00:16:36.701 --rc genhtml_function_coverage=1 00:16:36.701 --rc genhtml_legend=1 00:16:36.701 --rc geninfo_all_blocks=1 00:16:36.701 --rc geninfo_unexecuted_blocks=1 00:16:36.701 00:16:36.701 ' 00:16:36.701 13:01:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:36.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.701 --rc genhtml_branch_coverage=1 00:16:36.701 --rc genhtml_function_coverage=1 00:16:36.701 --rc genhtml_legend=1 00:16:36.701 --rc geninfo_all_blocks=1 00:16:36.701 --rc geninfo_unexecuted_blocks=1 00:16:36.701 00:16:36.701 ' 00:16:36.701 13:01:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:36.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.701 --rc genhtml_branch_coverage=1 00:16:36.701 --rc genhtml_function_coverage=1 00:16:36.701 --rc genhtml_legend=1 00:16:36.701 --rc geninfo_all_blocks=1 00:16:36.701 --rc geninfo_unexecuted_blocks=1 00:16:36.701 00:16:36.701 ' 00:16:36.701 
13:01:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:36.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.701 --rc genhtml_branch_coverage=1 00:16:36.701 --rc genhtml_function_coverage=1 00:16:36.701 --rc genhtml_legend=1 00:16:36.701 --rc geninfo_all_blocks=1 00:16:36.701 --rc geninfo_unexecuted_blocks=1 00:16:36.701 00:16:36.701 ' 00:16:36.701 13:01:17 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.701 13:01:17 -- nvmf/common.sh@7 -- # uname -s 00:16:36.701 13:01:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.701 13:01:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.701 13:01:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.701 13:01:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.701 13:01:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.701 13:01:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.701 13:01:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.701 13:01:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.701 13:01:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.701 13:01:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.701 13:01:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:36.701 13:01:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:36.701 13:01:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.701 13:01:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.701 13:01:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.701 13:01:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.701 13:01:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.701 13:01:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.701 13:01:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.701 13:01:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.701 13:01:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.701 13:01:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.701 13:01:17 -- paths/export.sh@5 -- # export PATH 00:16:36.701 13:01:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.701 13:01:17 -- nvmf/common.sh@46 -- # : 0 00:16:36.701 13:01:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:36.701 13:01:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:36.701 13:01:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:36.701 13:01:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.701 13:01:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.701 13:01:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:36.701 13:01:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:36.701 13:01:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:36.701 13:01:17 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:36.701 13:01:17 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:36.701 13:01:17 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:36.701 13:01:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:36.701 13:01:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.701 13:01:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:36.701 13:01:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:36.701 13:01:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:36.701 13:01:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.701 13:01:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.701 13:01:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.701 13:01:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:36.701 13:01:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:36.701 13:01:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:36.701 13:01:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:36.701 13:01:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:36.701 13:01:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:36.701 13:01:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.701 13:01:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.701 13:01:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.701 13:01:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:36.701 13:01:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.701 13:01:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.701 13:01:17 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.701 13:01:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.701 13:01:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.701 13:01:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.701 13:01:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.701 13:01:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.701 13:01:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:36.701 13:01:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:36.701 Cannot find device "nvmf_tgt_br" 00:16:36.701 13:01:17 -- nvmf/common.sh@154 -- # true 00:16:36.701 13:01:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.701 Cannot find device "nvmf_tgt_br2" 00:16:36.701 13:01:17 -- nvmf/common.sh@155 -- # true 00:16:36.701 13:01:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:36.701 13:01:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:36.701 Cannot find device "nvmf_tgt_br" 00:16:36.701 13:01:17 -- nvmf/common.sh@157 -- # true 00:16:36.701 13:01:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:36.701 Cannot find device "nvmf_tgt_br2" 00:16:36.701 13:01:17 -- nvmf/common.sh@158 -- # true 00:16:36.701 13:01:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:36.701 13:01:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:36.701 13:01:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.701 13:01:17 -- nvmf/common.sh@161 -- # true 00:16:36.701 13:01:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.701 13:01:17 -- nvmf/common.sh@162 -- # true 00:16:36.701 13:01:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.701 13:01:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.701 13:01:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.701 13:01:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.701 13:01:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.701 13:01:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.701 13:01:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.702 13:01:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.702 13:01:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.702 13:01:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:36.702 13:01:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:36.702 13:01:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:36.702 13:01:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:36.702 13:01:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.961 13:01:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.961 13:01:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:36.961 13:01:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:36.961 13:01:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:36.961 13:01:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.961 13:01:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.961 13:01:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.961 13:01:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.961 13:01:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.961 13:01:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:36.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:36.961 00:16:36.961 --- 10.0.0.2 ping statistics --- 00:16:36.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.961 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:36.961 13:01:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:36.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:36.961 00:16:36.961 --- 10.0.0.3 ping statistics --- 00:16:36.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.961 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:36.961 13:01:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:36.961 00:16:36.961 --- 10.0.0.1 ping statistics --- 00:16:36.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.961 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:36.961 13:01:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.961 13:01:17 -- nvmf/common.sh@421 -- # return 0 00:16:36.961 13:01:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:36.961 13:01:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.961 13:01:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:36.961 13:01:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:36.961 13:01:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.961 13:01:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:36.961 13:01:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:36.961 13:01:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:36.961 13:01:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:36.961 13:01:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.961 13:01:17 -- common/autotest_common.sh@10 -- # set +x 00:16:36.961 13:01:17 -- nvmf/common.sh@469 -- # nvmfpid=87496 00:16:36.961 13:01:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:36.961 13:01:17 -- nvmf/common.sh@470 -- # waitforlisten 87496 00:16:36.961 13:01:17 -- common/autotest_common.sh@829 -- # '[' -z 87496 ']' 00:16:36.961 13:01:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.961 13:01:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.961 13:01:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:36.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.961 13:01:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.961 13:01:17 -- common/autotest_common.sh@10 -- # set +x 00:16:36.961 [2024-12-13 13:01:17.646944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:36.961 [2024-12-13 13:01:17.647290] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.220 [2024-12-13 13:01:17.788924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.220 [2024-12-13 13:01:17.849708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.220 [2024-12-13 13:01:17.850347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.220 [2024-12-13 13:01:17.850539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.220 [2024-12-13 13:01:17.850960] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.220 [2024-12-13 13:01:17.851323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:37.220 [2024-12-13 13:01:17.851480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:37.220 [2024-12-13 13:01:17.851620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:37.220 [2024-12-13 13:01:17.851638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.788 13:01:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.788 13:01:18 -- common/autotest_common.sh@862 -- # return 0 00:16:37.788 13:01:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:37.788 13:01:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.788 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:16:37.788 13:01:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.788 13:01:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.788 13:01:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.788 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:16:38.047 [2024-12-13 13:01:18.572655] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.047 13:01:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.047 13:01:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:38.047 13:01:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.047 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:16:38.047 Malloc0 00:16:38.047 13:01:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.047 13:01:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:38.047 13:01:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.047 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:16:38.047 13:01:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.047 13:01:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.047 13:01:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.047 13:01:18 -- common/autotest_common.sh@10 -- # set +x 
00:16:38.047 13:01:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.047 13:01:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.047 13:01:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.047 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:16:38.047 [2024-12-13 13:01:18.640373] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.047 13:01:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.047 13:01:18 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:38.047 13:01:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:38.047 13:01:18 -- nvmf/common.sh@520 -- # config=() 00:16:38.047 13:01:18 -- nvmf/common.sh@520 -- # local subsystem config 00:16:38.047 13:01:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:38.047 13:01:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:38.047 { 00:16:38.047 "params": { 00:16:38.047 "name": "Nvme$subsystem", 00:16:38.047 "trtype": "$TEST_TRANSPORT", 00:16:38.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:38.047 "adrfam": "ipv4", 00:16:38.047 "trsvcid": "$NVMF_PORT", 00:16:38.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:38.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:38.047 "hdgst": ${hdgst:-false}, 00:16:38.047 "ddgst": ${ddgst:-false} 00:16:38.047 }, 00:16:38.047 "method": "bdev_nvme_attach_controller" 00:16:38.047 } 00:16:38.047 EOF 00:16:38.047 )") 00:16:38.047 13:01:18 -- nvmf/common.sh@542 -- # cat 00:16:38.047 13:01:18 -- nvmf/common.sh@544 -- # jq . 00:16:38.047 13:01:18 -- nvmf/common.sh@545 -- # IFS=, 00:16:38.047 13:01:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:38.047 "params": { 00:16:38.047 "name": "Nvme1", 00:16:38.047 "trtype": "tcp", 00:16:38.047 "traddr": "10.0.0.2", 00:16:38.047 "adrfam": "ipv4", 00:16:38.047 "trsvcid": "4420", 00:16:38.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:38.047 "hdgst": false, 00:16:38.047 "ddgst": false 00:16:38.047 }, 00:16:38.047 "method": "bdev_nvme_attach_controller" 00:16:38.047 }' 00:16:38.047 [2024-12-13 13:01:18.698042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:38.047 [2024-12-13 13:01:18.698131] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87550 ] 00:16:38.305 [2024-12-13 13:01:18.839147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:38.305 [2024-12-13 13:01:18.903345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.305 [2024-12-13 13:01:18.903490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.305 [2024-12-13 13:01:18.903499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.305 [2024-12-13 13:01:19.077410] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:38.305 [2024-12-13 13:01:19.077695] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:38.305 I/O targets: 00:16:38.305 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:38.305 00:16:38.305 00:16:38.305 CUnit - A unit testing framework for C - Version 2.1-3 00:16:38.305 http://cunit.sourceforge.net/ 00:16:38.305 00:16:38.305 00:16:38.305 Suite: bdevio tests on: Nvme1n1 00:16:38.563 Test: blockdev write read block ...passed 00:16:38.563 Test: blockdev write zeroes read block ...passed 00:16:38.563 Test: blockdev write zeroes read no split ...passed 00:16:38.563 Test: blockdev write zeroes read split ...passed 00:16:38.563 Test: blockdev write zeroes read split partial ...passed 00:16:38.563 Test: blockdev reset ...[2024-12-13 13:01:19.193723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:38.563 [2024-12-13 13:01:19.193973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02ee0 (9): Bad file descriptor 00:16:38.563 passed 00:16:38.563 Test: blockdev write read 8 blocks ...[2024-12-13 13:01:19.206688] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:38.563 passed 00:16:38.563 Test: blockdev write read size > 128k ...passed 00:16:38.563 Test: blockdev write read invalid size ...passed 00:16:38.563 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:38.563 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:38.563 Test: blockdev write read max offset ...passed 00:16:38.563 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:38.563 Test: blockdev writev readv 8 blocks ...passed 00:16:38.563 Test: blockdev writev readv 30 x 1block ...passed 00:16:38.822 Test: blockdev writev readv block ...passed 00:16:38.822 Test: blockdev writev readv size > 128k ...passed 00:16:38.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:38.822 Test: blockdev comparev and writev ...[2024-12-13 13:01:19.378182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.378382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.378427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.378440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.378740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.378773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.378802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.378815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.379099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.379117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.379134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.379144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.379433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.379465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.379481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.822 [2024-12-13 13:01:19.379491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:38.822 passed 00:16:38.822 Test: blockdev nvme passthru rw ...passed 00:16:38.822 Test: blockdev nvme passthru vendor specific ...[2024-12-13 13:01:19.461217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.822 [2024-12-13 13:01:19.461247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.461366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.822 [2024-12-13 13:01:19.461382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:38.822 [2024-12-13 13:01:19.461492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.822 [2024-12-13 13:01:19.461507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:38.822 passed 00:16:38.822 Test: blockdev nvme admin passthru ...[2024-12-13 13:01:19.461611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.822 [2024-12-13 13:01:19.461626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:38.822 passed 00:16:38.822 Test: blockdev copy ...passed 00:16:38.822 00:16:38.822 Run Summary: Type Total Ran Passed Failed Inactive 00:16:38.822 suites 1 1 n/a 0 0 00:16:38.822 tests 23 23 23 0 0 00:16:38.822 asserts 152 152 152 0 n/a 00:16:38.822 00:16:38.822 Elapsed time = 0.875 seconds 00:16:39.080 13:01:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.080 13:01:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.080 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:16:39.081 13:01:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.081 13:01:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:39.081 13:01:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:39.081 13:01:19 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:39.081 13:01:19 -- nvmf/common.sh@116 -- # sync 00:16:39.081 13:01:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:39.081 13:01:19 -- nvmf/common.sh@119 -- # set +e 00:16:39.081 13:01:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:39.081 13:01:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:39.081 rmmod nvme_tcp 00:16:39.081 rmmod nvme_fabrics 00:16:39.081 rmmod nvme_keyring 00:16:39.081 13:01:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:39.081 13:01:19 -- nvmf/common.sh@123 -- # set -e 00:16:39.081 13:01:19 -- nvmf/common.sh@124 -- # return 0 00:16:39.081 13:01:19 -- nvmf/common.sh@477 -- # '[' -n 87496 ']' 00:16:39.081 13:01:19 -- nvmf/common.sh@478 -- # killprocess 87496 00:16:39.081 13:01:19 -- common/autotest_common.sh@936 -- # '[' -z 87496 ']' 00:16:39.081 13:01:19 -- common/autotest_common.sh@940 -- # kill -0 87496 00:16:39.081 13:01:19 -- common/autotest_common.sh@941 -- # uname 00:16:39.081 13:01:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.081 13:01:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87496 00:16:39.081 13:01:19 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:39.081 13:01:19 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:39.081 killing process with pid 87496 00:16:39.081 13:01:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87496' 00:16:39.081 13:01:19 -- common/autotest_common.sh@955 -- # kill 87496 00:16:39.081 13:01:19 -- common/autotest_common.sh@960 -- # wait 87496 00:16:39.339 13:01:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:39.339 13:01:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:39.339 13:01:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:39.339 13:01:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.339 13:01:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:39.339 13:01:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.339 13:01:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.339 13:01:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.340 13:01:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:39.340 00:16:39.340 real 0m3.025s 00:16:39.340 user 0m10.691s 00:16:39.340 sys 0m0.803s 00:16:39.340 13:01:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:39.340 13:01:20 -- common/autotest_common.sh@10 -- # set +x 00:16:39.340 ************************************ 00:16:39.340 END TEST nvmf_bdevio 00:16:39.340 ************************************ 00:16:39.636 13:01:20 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:39.636 13:01:20 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:39.636 13:01:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:39.636 13:01:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.636 13:01:20 -- common/autotest_common.sh@10 -- # set +x 00:16:39.636 ************************************ 00:16:39.636 START TEST nvmf_bdevio_no_huge 00:16:39.636 ************************************ 00:16:39.636 13:01:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:39.636 * Looking for test storage... 
00:16:39.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:39.636 13:01:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:39.636 13:01:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:39.636 13:01:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:39.636 13:01:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:39.636 13:01:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:39.636 13:01:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:39.636 13:01:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:39.636 13:01:20 -- scripts/common.sh@335 -- # IFS=.-: 00:16:39.636 13:01:20 -- scripts/common.sh@335 -- # read -ra ver1 00:16:39.636 13:01:20 -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.636 13:01:20 -- scripts/common.sh@336 -- # read -ra ver2 00:16:39.636 13:01:20 -- scripts/common.sh@337 -- # local 'op=<' 00:16:39.636 13:01:20 -- scripts/common.sh@339 -- # ver1_l=2 00:16:39.636 13:01:20 -- scripts/common.sh@340 -- # ver2_l=1 00:16:39.636 13:01:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:39.636 13:01:20 -- scripts/common.sh@343 -- # case "$op" in 00:16:39.636 13:01:20 -- scripts/common.sh@344 -- # : 1 00:16:39.636 13:01:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:39.636 13:01:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.636 13:01:20 -- scripts/common.sh@364 -- # decimal 1 00:16:39.636 13:01:20 -- scripts/common.sh@352 -- # local d=1 00:16:39.636 13:01:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.637 13:01:20 -- scripts/common.sh@354 -- # echo 1 00:16:39.637 13:01:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:39.637 13:01:20 -- scripts/common.sh@365 -- # decimal 2 00:16:39.637 13:01:20 -- scripts/common.sh@352 -- # local d=2 00:16:39.637 13:01:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.637 13:01:20 -- scripts/common.sh@354 -- # echo 2 00:16:39.637 13:01:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:39.637 13:01:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:39.637 13:01:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:39.637 13:01:20 -- scripts/common.sh@367 -- # return 0 00:16:39.637 13:01:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.637 13:01:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:39.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.637 --rc genhtml_branch_coverage=1 00:16:39.637 --rc genhtml_function_coverage=1 00:16:39.637 --rc genhtml_legend=1 00:16:39.637 --rc geninfo_all_blocks=1 00:16:39.637 --rc geninfo_unexecuted_blocks=1 00:16:39.637 00:16:39.637 ' 00:16:39.637 13:01:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:39.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.637 --rc genhtml_branch_coverage=1 00:16:39.637 --rc genhtml_function_coverage=1 00:16:39.637 --rc genhtml_legend=1 00:16:39.637 --rc geninfo_all_blocks=1 00:16:39.637 --rc geninfo_unexecuted_blocks=1 00:16:39.637 00:16:39.637 ' 00:16:39.637 13:01:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:39.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.637 --rc genhtml_branch_coverage=1 00:16:39.637 --rc genhtml_function_coverage=1 00:16:39.637 --rc genhtml_legend=1 00:16:39.637 --rc geninfo_all_blocks=1 00:16:39.637 --rc geninfo_unexecuted_blocks=1 00:16:39.637 00:16:39.637 ' 00:16:39.637 
13:01:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:39.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.637 --rc genhtml_branch_coverage=1 00:16:39.637 --rc genhtml_function_coverage=1 00:16:39.637 --rc genhtml_legend=1 00:16:39.637 --rc geninfo_all_blocks=1 00:16:39.637 --rc geninfo_unexecuted_blocks=1 00:16:39.637 00:16:39.637 ' 00:16:39.637 13:01:20 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.637 13:01:20 -- nvmf/common.sh@7 -- # uname -s 00:16:39.637 13:01:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.637 13:01:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.637 13:01:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.637 13:01:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.637 13:01:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.637 13:01:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.637 13:01:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.637 13:01:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.637 13:01:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.637 13:01:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.637 13:01:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:39.637 13:01:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:39.637 13:01:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.637 13:01:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.637 13:01:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.637 13:01:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.637 13:01:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.637 13:01:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.637 13:01:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.637 13:01:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.637 13:01:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.637 13:01:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.637 13:01:20 -- paths/export.sh@5 -- # export PATH 00:16:39.637 13:01:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.637 13:01:20 -- nvmf/common.sh@46 -- # : 0 00:16:39.637 13:01:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.637 13:01:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.637 13:01:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.637 13:01:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.637 13:01:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.637 13:01:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.637 13:01:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.637 13:01:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.637 13:01:20 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.637 13:01:20 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.637 13:01:20 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:39.637 13:01:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.637 13:01:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.637 13:01:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.637 13:01:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.637 13:01:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.637 13:01:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.637 13:01:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.637 13:01:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.637 13:01:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:39.637 13:01:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:39.637 13:01:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:39.637 13:01:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:39.638 13:01:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:39.638 13:01:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:39.638 13:01:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.638 13:01:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.638 13:01:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:39.638 13:01:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:39.638 13:01:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:39.638 13:01:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:39.638 13:01:20 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:39.638 13:01:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.638 13:01:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:39.638 13:01:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.638 13:01:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.638 13:01:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.638 13:01:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:39.638 13:01:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:39.638 Cannot find device "nvmf_tgt_br" 00:16:39.638 13:01:20 -- nvmf/common.sh@154 -- # true 00:16:39.638 13:01:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.638 Cannot find device "nvmf_tgt_br2" 00:16:39.638 13:01:20 -- nvmf/common.sh@155 -- # true 00:16:39.638 13:01:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:39.638 13:01:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:39.638 Cannot find device "nvmf_tgt_br" 00:16:39.638 13:01:20 -- nvmf/common.sh@157 -- # true 00:16:39.638 13:01:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:39.897 Cannot find device "nvmf_tgt_br2" 00:16:39.897 13:01:20 -- nvmf/common.sh@158 -- # true 00:16:39.897 13:01:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:39.897 13:01:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:39.897 13:01:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.897 13:01:20 -- nvmf/common.sh@161 -- # true 00:16:39.897 13:01:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.897 13:01:20 -- nvmf/common.sh@162 -- # true 00:16:39.897 13:01:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.897 13:01:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.897 13:01:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.897 13:01:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.897 13:01:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.897 13:01:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.897 13:01:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.897 13:01:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:39.897 13:01:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:39.897 13:01:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:39.897 13:01:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:39.897 13:01:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:39.897 13:01:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:39.897 13:01:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.897 13:01:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.897 13:01:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:39.897 13:01:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:39.897 13:01:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:39.897 13:01:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:39.897 13:01:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:39.897 13:01:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.897 13:01:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.897 13:01:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.897 13:01:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:39.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:16:39.897 00:16:39.897 --- 10.0.0.2 ping statistics --- 00:16:39.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.897 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:39.897 13:01:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:39.897 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.897 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:39.897 00:16:39.897 --- 10.0.0.3 ping statistics --- 00:16:39.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.897 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:39.897 13:01:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:39.897 00:16:39.897 --- 10.0.0.1 ping statistics --- 00:16:39.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.897 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:39.897 13:01:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.897 13:01:20 -- nvmf/common.sh@421 -- # return 0 00:16:39.897 13:01:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.897 13:01:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.897 13:01:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:39.897 13:01:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:39.897 13:01:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.897 13:01:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:39.897 13:01:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:39.897 13:01:20 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:39.897 13:01:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.897 13:01:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:39.897 13:01:20 -- common/autotest_common.sh@10 -- # set +x 00:16:39.897 13:01:20 -- nvmf/common.sh@469 -- # nvmfpid=87738 00:16:39.898 13:01:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:39.898 13:01:20 -- nvmf/common.sh@470 -- # waitforlisten 87738 00:16:39.898 13:01:20 -- common/autotest_common.sh@829 -- # '[' -z 87738 ']' 00:16:39.898 13:01:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.898 13:01:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:39.898 13:01:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.898 13:01:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.898 13:01:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.156 [2024-12-13 13:01:20.720970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:40.156 [2024-12-13 13:01:20.721072] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:40.157 [2024-12-13 13:01:20.867281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.415 [2024-12-13 13:01:20.950325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.415 [2024-12-13 13:01:20.950443] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.415 [2024-12-13 13:01:20.950454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.415 [2024-12-13 13:01:20.950461] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.415 [2024-12-13 13:01:20.950620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.415 [2024-12-13 13:01:20.951322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:40.415 [2024-12-13 13:01:20.951459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:40.415 [2024-12-13 13:01:20.951633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.982 13:01:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.982 13:01:21 -- common/autotest_common.sh@862 -- # return 0 00:16:40.982 13:01:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:40.982 13:01:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.982 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.982 13:01:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.982 13:01:21 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.982 13:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.982 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.982 [2024-12-13 13:01:21.706826] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.982 13:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.982 13:01:21 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.982 13:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.982 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.982 Malloc0 00:16:40.982 13:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.982 13:01:21 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.982 13:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.982 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.982 13:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.982 13:01:21 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.982 13:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.982 
13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.982 13:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.982 13:01:21 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.982 13:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.982 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.982 [2024-12-13 13:01:21.744914] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.982 13:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.982 13:01:21 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:40.982 13:01:21 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:40.982 13:01:21 -- nvmf/common.sh@520 -- # config=() 00:16:40.982 13:01:21 -- nvmf/common.sh@520 -- # local subsystem config 00:16:40.982 13:01:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:40.982 13:01:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:40.982 { 00:16:40.982 "params": { 00:16:40.982 "name": "Nvme$subsystem", 00:16:40.982 "trtype": "$TEST_TRANSPORT", 00:16:40.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.982 "adrfam": "ipv4", 00:16:40.982 "trsvcid": "$NVMF_PORT", 00:16:40.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.982 "hdgst": ${hdgst:-false}, 00:16:40.982 "ddgst": ${ddgst:-false} 00:16:40.982 }, 00:16:40.982 "method": "bdev_nvme_attach_controller" 00:16:40.982 } 00:16:40.982 EOF 00:16:40.982 )") 00:16:40.982 13:01:21 -- nvmf/common.sh@542 -- # cat 00:16:40.982 13:01:21 -- nvmf/common.sh@544 -- # jq . 00:16:41.241 13:01:21 -- nvmf/common.sh@545 -- # IFS=, 00:16:41.241 13:01:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:41.241 "params": { 00:16:41.241 "name": "Nvme1", 00:16:41.241 "trtype": "tcp", 00:16:41.241 "traddr": "10.0.0.2", 00:16:41.241 "adrfam": "ipv4", 00:16:41.241 "trsvcid": "4420", 00:16:41.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.241 "hdgst": false, 00:16:41.241 "ddgst": false 00:16:41.241 }, 00:16:41.241 "method": "bdev_nvme_attach_controller" 00:16:41.241 }' 00:16:41.241 [2024-12-13 13:01:21.802967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:41.241 [2024-12-13 13:01:21.803085] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87792 ] 00:16:41.241 [2024-12-13 13:01:21.945573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.500 [2024-12-13 13:01:22.078920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.500 [2024-12-13 13:01:22.079108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.500 [2024-12-13 13:01:22.079115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.758 [2024-12-13 13:01:22.280269] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:41.758 [2024-12-13 13:01:22.280323] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:41.758 I/O targets: 00:16:41.758 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:41.758 00:16:41.758 00:16:41.758 CUnit - A unit testing framework for C - Version 2.1-3 00:16:41.759 http://cunit.sourceforge.net/ 00:16:41.759 00:16:41.759 00:16:41.759 Suite: bdevio tests on: Nvme1n1 00:16:41.759 Test: blockdev write read block ...passed 00:16:41.759 Test: blockdev write zeroes read block ...passed 00:16:41.759 Test: blockdev write zeroes read no split ...passed 00:16:41.759 Test: blockdev write zeroes read split ...passed 00:16:41.759 Test: blockdev write zeroes read split partial ...passed 00:16:41.759 Test: blockdev reset ...[2024-12-13 13:01:22.410499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:41.759 [2024-12-13 13:01:22.410604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110ad10 (9): Bad file descriptor 00:16:41.759 passed 00:16:41.759 Test: blockdev write read 8 blocks ...[2024-12-13 13:01:22.427677] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:41.759 passed 00:16:41.759 Test: blockdev write read size > 128k ...passed 00:16:41.759 Test: blockdev write read invalid size ...passed 00:16:41.759 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:41.759 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:41.759 Test: blockdev write read max offset ...passed 00:16:42.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.018 Test: blockdev writev readv 8 blocks ...passed 00:16:42.018 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.018 Test: blockdev writev readv block ...passed 00:16:42.018 Test: blockdev writev readv size > 128k ...passed 00:16:42.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.018 Test: blockdev comparev and writev ...[2024-12-13 13:01:22.600122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.600193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.600212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.600223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.600492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.600510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.600526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.600537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.600828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.600847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.600863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.600874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.601156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.601173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.601189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.018 [2024-12-13 13:01:22.601199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.018 passed 00:16:42.018 Test: blockdev nvme passthru rw ...passed 00:16:42.018 Test: blockdev nvme passthru vendor specific ...[2024-12-13 13:01:22.683097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.018 [2024-12-13 13:01:22.683122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.683238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.018 [2024-12-13 13:01:22.683254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.683365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.018 [2024-12-13 13:01:22.683380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.018 [2024-12-13 13:01:22.683487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.018 [2024-12-13 13:01:22.683502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.018 passed 00:16:42.018 Test: blockdev nvme admin passthru ...passed 00:16:42.018 Test: blockdev copy ...passed 00:16:42.018 00:16:42.018 Run Summary: Type Total Ran Passed Failed Inactive 00:16:42.018 suites 1 1 n/a 0 0 00:16:42.018 tests 23 23 23 0 0 00:16:42.018 asserts 152 152 152 0 n/a 00:16:42.018 00:16:42.018 Elapsed time = 0.931 seconds 00:16:42.277 13:01:23 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.277 13:01:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.277 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:16:42.535 13:01:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.535 13:01:23 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:42.535 13:01:23 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:42.535 13:01:23 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:42.535 13:01:23 -- nvmf/common.sh@116 -- # sync 00:16:42.535 13:01:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.535 13:01:23 -- nvmf/common.sh@119 -- # set +e 00:16:42.535 13:01:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.535 13:01:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:42.535 rmmod nvme_tcp 00:16:42.535 rmmod nvme_fabrics 00:16:42.535 rmmod nvme_keyring 00:16:42.535 13:01:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.535 13:01:23 -- nvmf/common.sh@123 -- # set -e 00:16:42.535 13:01:23 -- nvmf/common.sh@124 -- # return 0 00:16:42.535 13:01:23 -- nvmf/common.sh@477 -- # '[' -n 87738 ']' 00:16:42.535 13:01:23 -- nvmf/common.sh@478 -- # killprocess 87738 00:16:42.535 13:01:23 -- common/autotest_common.sh@936 -- # '[' -z 87738 ']' 00:16:42.535 13:01:23 -- common/autotest_common.sh@940 -- # kill -0 87738 00:16:42.535 13:01:23 -- common/autotest_common.sh@941 -- # uname 00:16:42.535 13:01:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.535 13:01:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87738 00:16:42.535 13:01:23 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:42.535 13:01:23 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:42.535 killing process with pid 87738 00:16:42.535 13:01:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87738' 00:16:42.535 13:01:23 -- common/autotest_common.sh@955 -- # kill 87738 00:16:42.535 13:01:23 -- common/autotest_common.sh@960 -- # wait 87738 00:16:42.793 13:01:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.793 13:01:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.793 13:01:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.793 13:01:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.793 13:01:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.793 13:01:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.793 13:01:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.793 13:01:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.052 13:01:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:43.052 00:16:43.052 real 0m3.452s 00:16:43.052 user 0m12.333s 00:16:43.052 sys 0m1.319s 00:16:43.052 13:01:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:43.052 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 ************************************ 00:16:43.052 END TEST nvmf_bdevio_no_huge 00:16:43.052 ************************************ 00:16:43.052 13:01:23 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:43.052 13:01:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.052 13:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.052 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 ************************************ 00:16:43.052 START TEST nvmf_tls 00:16:43.052 ************************************ 00:16:43.052 13:01:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:43.052 * Looking for test storage... 
00:16:43.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:43.052 13:01:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:43.052 13:01:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:43.052 13:01:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:43.052 13:01:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:43.052 13:01:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:43.052 13:01:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:43.052 13:01:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:43.052 13:01:23 -- scripts/common.sh@335 -- # IFS=.-: 00:16:43.052 13:01:23 -- scripts/common.sh@335 -- # read -ra ver1 00:16:43.052 13:01:23 -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.052 13:01:23 -- scripts/common.sh@336 -- # read -ra ver2 00:16:43.052 13:01:23 -- scripts/common.sh@337 -- # local 'op=<' 00:16:43.052 13:01:23 -- scripts/common.sh@339 -- # ver1_l=2 00:16:43.052 13:01:23 -- scripts/common.sh@340 -- # ver2_l=1 00:16:43.052 13:01:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:43.052 13:01:23 -- scripts/common.sh@343 -- # case "$op" in 00:16:43.052 13:01:23 -- scripts/common.sh@344 -- # : 1 00:16:43.052 13:01:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:43.052 13:01:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.052 13:01:23 -- scripts/common.sh@364 -- # decimal 1 00:16:43.052 13:01:23 -- scripts/common.sh@352 -- # local d=1 00:16:43.052 13:01:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.052 13:01:23 -- scripts/common.sh@354 -- # echo 1 00:16:43.052 13:01:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:43.052 13:01:23 -- scripts/common.sh@365 -- # decimal 2 00:16:43.052 13:01:23 -- scripts/common.sh@352 -- # local d=2 00:16:43.052 13:01:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.052 13:01:23 -- scripts/common.sh@354 -- # echo 2 00:16:43.052 13:01:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:43.052 13:01:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:43.052 13:01:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:43.052 13:01:23 -- scripts/common.sh@367 -- # return 0 00:16:43.052 13:01:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.052 13:01:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:43.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.052 --rc genhtml_branch_coverage=1 00:16:43.052 --rc genhtml_function_coverage=1 00:16:43.052 --rc genhtml_legend=1 00:16:43.052 --rc geninfo_all_blocks=1 00:16:43.052 --rc geninfo_unexecuted_blocks=1 00:16:43.052 00:16:43.052 ' 00:16:43.052 13:01:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:43.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.052 --rc genhtml_branch_coverage=1 00:16:43.052 --rc genhtml_function_coverage=1 00:16:43.052 --rc genhtml_legend=1 00:16:43.052 --rc geninfo_all_blocks=1 00:16:43.052 --rc geninfo_unexecuted_blocks=1 00:16:43.052 00:16:43.052 ' 00:16:43.052 13:01:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:43.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.052 --rc genhtml_branch_coverage=1 00:16:43.052 --rc genhtml_function_coverage=1 00:16:43.052 --rc genhtml_legend=1 00:16:43.052 --rc geninfo_all_blocks=1 00:16:43.052 --rc geninfo_unexecuted_blocks=1 00:16:43.052 00:16:43.052 ' 00:16:43.052 
13:01:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:43.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.052 --rc genhtml_branch_coverage=1 00:16:43.052 --rc genhtml_function_coverage=1 00:16:43.052 --rc genhtml_legend=1 00:16:43.052 --rc geninfo_all_blocks=1 00:16:43.052 --rc geninfo_unexecuted_blocks=1 00:16:43.052 00:16:43.052 ' 00:16:43.052 13:01:23 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.052 13:01:23 -- nvmf/common.sh@7 -- # uname -s 00:16:43.052 13:01:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.052 13:01:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.052 13:01:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.052 13:01:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.052 13:01:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.052 13:01:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.052 13:01:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.053 13:01:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.053 13:01:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.053 13:01:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.053 13:01:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:43.053 13:01:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:16:43.053 13:01:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.053 13:01:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.053 13:01:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.053 13:01:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.053 13:01:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.053 13:01:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.053 13:01:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.053 13:01:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.053 13:01:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.053 13:01:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.053 13:01:23 -- paths/export.sh@5 -- # export PATH 00:16:43.053 13:01:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.053 13:01:23 -- nvmf/common.sh@46 -- # : 0 00:16:43.053 13:01:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:43.053 13:01:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:43.053 13:01:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:43.053 13:01:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.053 13:01:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.053 13:01:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:43.053 13:01:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:43.053 13:01:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:43.053 13:01:23 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.053 13:01:23 -- target/tls.sh@71 -- # nvmftestinit 00:16:43.053 13:01:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:43.053 13:01:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.053 13:01:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:43.053 13:01:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:43.053 13:01:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:43.053 13:01:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.053 13:01:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.053 13:01:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.053 13:01:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:43.053 13:01:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:43.053 13:01:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:43.053 13:01:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:43.053 13:01:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:43.053 13:01:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:43.053 13:01:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.053 13:01:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.053 13:01:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.053 13:01:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:43.053 13:01:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.053 13:01:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.053 13:01:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.053 
13:01:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.053 13:01:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.053 13:01:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.053 13:01:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.053 13:01:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.053 13:01:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:43.311 13:01:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:43.311 Cannot find device "nvmf_tgt_br" 00:16:43.311 13:01:23 -- nvmf/common.sh@154 -- # true 00:16:43.312 13:01:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.312 Cannot find device "nvmf_tgt_br2" 00:16:43.312 13:01:23 -- nvmf/common.sh@155 -- # true 00:16:43.312 13:01:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:43.312 13:01:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:43.312 Cannot find device "nvmf_tgt_br" 00:16:43.312 13:01:23 -- nvmf/common.sh@157 -- # true 00:16:43.312 13:01:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:43.312 Cannot find device "nvmf_tgt_br2" 00:16:43.312 13:01:23 -- nvmf/common.sh@158 -- # true 00:16:43.312 13:01:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:43.312 13:01:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:43.312 13:01:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.312 13:01:23 -- nvmf/common.sh@161 -- # true 00:16:43.312 13:01:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.312 13:01:23 -- nvmf/common.sh@162 -- # true 00:16:43.312 13:01:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.312 13:01:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.312 13:01:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.312 13:01:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.312 13:01:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.312 13:01:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.312 13:01:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.312 13:01:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.312 13:01:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.312 13:01:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:43.312 13:01:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:43.312 13:01:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:43.312 13:01:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:43.312 13:01:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.312 13:01:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.312 13:01:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.312 13:01:24 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:43.312 13:01:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:43.312 13:01:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.312 13:01:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.570 13:01:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.570 13:01:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.571 13:01:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.571 13:01:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:43.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:43.571 00:16:43.571 --- 10.0.0.2 ping statistics --- 00:16:43.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.571 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:43.571 13:01:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:43.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:16:43.571 00:16:43.571 --- 10.0.0.3 ping statistics --- 00:16:43.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.571 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:43.571 13:01:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:43.571 00:16:43.571 --- 10.0.0.1 ping statistics --- 00:16:43.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.571 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:43.571 13:01:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.571 13:01:24 -- nvmf/common.sh@421 -- # return 0 00:16:43.571 13:01:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:43.571 13:01:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.571 13:01:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:43.571 13:01:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:43.571 13:01:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.571 13:01:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:43.571 13:01:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:43.571 13:01:24 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:43.571 13:01:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.571 13:01:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.571 13:01:24 -- common/autotest_common.sh@10 -- # set +x 00:16:43.571 13:01:24 -- nvmf/common.sh@469 -- # nvmfpid=87978 00:16:43.571 13:01:24 -- nvmf/common.sh@470 -- # waitforlisten 87978 00:16:43.571 13:01:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:43.571 13:01:24 -- common/autotest_common.sh@829 -- # '[' -z 87978 ']' 00:16:43.571 13:01:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.571 13:01:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
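(Editorial aside: the nvmf_veth_init steps traced above build a small virtual topology -- a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, and ping checks over 10.0.0.1-10.0.0.3. Condensed into a standalone sketch; names and addresses mirror the log, but this is illustrative rather than the exact nvmf/common.sh code.)

# Hedged sketch of the veth/bridge/namespace layout used by the test
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# (a second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way)
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator side reaching the target address over the bridge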
00:16:43.571 13:01:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.571 13:01:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.571 13:01:24 -- common/autotest_common.sh@10 -- # set +x 00:16:43.571 [2024-12-13 13:01:24.217521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:43.571 [2024-12-13 13:01:24.217608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.829 [2024-12-13 13:01:24.356717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.830 [2024-12-13 13:01:24.427071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:43.830 [2024-12-13 13:01:24.427240] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.830 [2024-12-13 13:01:24.427261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.830 [2024-12-13 13:01:24.427272] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.830 [2024-12-13 13:01:24.427311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.830 13:01:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.830 13:01:24 -- common/autotest_common.sh@862 -- # return 0 00:16:43.830 13:01:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:43.830 13:01:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:43.830 13:01:24 -- common/autotest_common.sh@10 -- # set +x 00:16:43.830 13:01:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.830 13:01:24 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:43.830 13:01:24 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:44.088 true 00:16:44.088 13:01:24 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:44.088 13:01:24 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:44.347 13:01:25 -- target/tls.sh@82 -- # version=0 00:16:44.347 13:01:25 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:44.347 13:01:25 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:44.605 13:01:25 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:44.605 13:01:25 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:44.862 13:01:25 -- target/tls.sh@90 -- # version=13 00:16:44.862 13:01:25 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:44.862 13:01:25 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:45.121 13:01:25 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:45.121 13:01:25 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:45.379 13:01:26 -- target/tls.sh@98 -- # version=7 00:16:45.379 13:01:26 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:45.379 13:01:26 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:45.379 13:01:26 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:45.637 13:01:26 -- 
target/tls.sh@105 -- # ktls=false 00:16:45.637 13:01:26 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:45.637 13:01:26 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:45.895 13:01:26 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:45.895 13:01:26 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.154 13:01:26 -- target/tls.sh@113 -- # ktls=true 00:16:46.154 13:01:26 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:46.154 13:01:26 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:46.154 13:01:26 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.154 13:01:26 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:46.412 13:01:27 -- target/tls.sh@121 -- # ktls=false 00:16:46.413 13:01:27 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:46.413 13:01:27 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:46.413 13:01:27 -- target/tls.sh@49 -- # local key hash crc 00:16:46.413 13:01:27 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:46.413 13:01:27 -- target/tls.sh@51 -- # hash=01 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # gzip -1 -c 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # tail -c8 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # head -c 4 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # crc='p$H�' 00:16:46.413 13:01:27 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:46.413 13:01:27 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:46.413 13:01:27 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:46.413 13:01:27 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:46.413 13:01:27 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:46.413 13:01:27 -- target/tls.sh@49 -- # local key hash crc 00:16:46.413 13:01:27 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:46.413 13:01:27 -- target/tls.sh@51 -- # hash=01 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # gzip -1 -c 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # head -c 4 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # tail -c8 00:16:46.413 13:01:27 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:46.413 13:01:27 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:46.413 13:01:27 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:46.413 13:01:27 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:46.413 13:01:27 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:46.413 13:01:27 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.413 13:01:27 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:46.413 13:01:27 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:46.413 13:01:27 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
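(Editorial aside: the format_interchange_psk trace above derives the NVMe/TCP TLS "interchange" key: the configured hex key is run through gzip -1, whose trailing 8 bytes are CRC32 plus size, head -c4 keeps the CRC, and key-plus-CRC is base64-encoded into the NVMeTLSkey-1:01:...: string. A hedged, self-contained restatement of the same pipeline follows; it reproduces the value seen in the log for this sample key.)

# Sketch of the interchange-PSK derivation traced above (configured key, hash id 01)
key=00112233445566778899aabbccddeeff
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # CRC32 of the key text (binary, little-endian)
# note: holding raw CRC bytes in a shell variable is fragile in general (NUL/newline bytes),
# but works for this sample key, matching what the script does
psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: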
00:16:46.413 13:01:27 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.413 13:01:27 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:46.413 13:01:27 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:46.671 13:01:27 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:46.931 13:01:27 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.931 13:01:27 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.931 13:01:27 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:47.189 [2024-12-13 13:01:27.868984] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.189 13:01:27 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:47.448 13:01:28 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:47.706 [2024-12-13 13:01:28.417140] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.706 [2024-12-13 13:01:28.417402] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.706 13:01:28 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:47.965 malloc0 00:16:47.965 13:01:28 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:48.223 13:01:28 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:48.481 13:01:29 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:00.682 Initializing NVMe Controllers 00:17:00.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:00.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:00.682 Initialization complete. Launching workers. 
00:17:00.682 ======================================================== 00:17:00.682 Latency(us) 00:17:00.682 Device Information : IOPS MiB/s Average min max 00:17:00.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11633.70 45.44 5502.27 1444.11 7676.83 00:17:00.682 ======================================================== 00:17:00.682 Total : 11633.70 45.44 5502.27 1444.11 7676.83 00:17:00.682 00:17:00.682 13:01:39 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:00.682 13:01:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.682 13:01:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:00.682 13:01:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:00.682 13:01:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:00.682 13:01:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.682 13:01:39 -- target/tls.sh@28 -- # bdevperf_pid=88332 00:17:00.682 13:01:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.682 13:01:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.682 13:01:39 -- target/tls.sh@31 -- # waitforlisten 88332 /var/tmp/bdevperf.sock 00:17:00.682 13:01:39 -- common/autotest_common.sh@829 -- # '[' -z 88332 ']' 00:17:00.682 13:01:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.682 13:01:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.682 13:01:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.682 13:01:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.682 13:01:39 -- common/autotest_common.sh@10 -- # set +x 00:17:00.682 [2024-12-13 13:01:39.311118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:00.682 [2024-12-13 13:01:39.311234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88332 ] 00:17:00.682 [2024-12-13 13:01:39.449509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.682 [2024-12-13 13:01:39.522657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.682 13:01:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.682 13:01:40 -- common/autotest_common.sh@862 -- # return 0 00:17:00.682 13:01:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:00.682 [2024-12-13 13:01:40.529961] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.682 TLSTESTn1 00:17:00.682 13:01:40 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:00.682 Running I/O for 10 seconds... 
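(Editorial aside: the run_bdevperf helper traced above starts bdevperf on its own RPC socket, attaches an NVMe-oF/TCP controller with the PSK, and then drives I/O through bdevperf.py. A hedged outline of the happy-path flow, i.e. the matching-key case where I/O succeeds; the real helper also waits for the RPC socket before issuing the attach.)

# Hedged outline of run_bdevperf's happy path
sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s "$sock" perform_tests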
00:17:10.655 00:17:10.655 Latency(us) 00:17:10.655 [2024-12-13T13:01:51.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.655 [2024-12-13T13:01:51.431Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:10.655 Verification LBA range: start 0x0 length 0x2000 00:17:10.655 TLSTESTn1 : 10.02 6169.58 24.10 0.00 0.00 20712.68 5362.04 21805.61 00:17:10.655 [2024-12-13T13:01:51.431Z] =================================================================================================================== 00:17:10.655 [2024-12-13T13:01:51.431Z] Total : 6169.58 24.10 0.00 0.00 20712.68 5362.04 21805.61 00:17:10.655 0 00:17:10.655 13:01:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.655 13:01:50 -- target/tls.sh@45 -- # killprocess 88332 00:17:10.655 13:01:50 -- common/autotest_common.sh@936 -- # '[' -z 88332 ']' 00:17:10.655 13:01:50 -- common/autotest_common.sh@940 -- # kill -0 88332 00:17:10.655 13:01:50 -- common/autotest_common.sh@941 -- # uname 00:17:10.655 13:01:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.655 13:01:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88332 00:17:10.655 killing process with pid 88332 00:17:10.655 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.655 00:17:10.655 Latency(us) 00:17:10.655 [2024-12-13T13:01:51.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.655 [2024-12-13T13:01:51.431Z] =================================================================================================================== 00:17:10.655 [2024-12-13T13:01:51.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.655 13:01:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:10.655 13:01:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:10.655 13:01:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88332' 00:17:10.655 13:01:50 -- common/autotest_common.sh@955 -- # kill 88332 00:17:10.655 13:01:50 -- common/autotest_common.sh@960 -- # wait 88332 00:17:10.655 13:01:50 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:10.655 13:01:50 -- common/autotest_common.sh@650 -- # local es=0 00:17:10.655 13:01:50 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:10.655 13:01:50 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:10.655 13:01:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.655 13:01:50 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:10.655 13:01:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.655 13:01:50 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:10.655 13:01:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.655 13:01:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.655 13:01:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.655 13:01:50 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:10.655 13:01:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.655 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.655 13:01:50 -- target/tls.sh@28 -- # bdevperf_pid=88484 00:17:10.655 13:01:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.655 13:01:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.655 13:01:50 -- target/tls.sh@31 -- # waitforlisten 88484 /var/tmp/bdevperf.sock 00:17:10.655 13:01:50 -- common/autotest_common.sh@829 -- # '[' -z 88484 ']' 00:17:10.655 13:01:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.655 13:01:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.655 13:01:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.655 13:01:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.655 13:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:10.655 [2024-12-13 13:01:51.026283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:10.655 [2024-12-13 13:01:51.027106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88484 ] 00:17:10.655 [2024-12-13 13:01:51.163167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.655 [2024-12-13 13:01:51.228688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.591 13:01:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.591 13:01:52 -- common/autotest_common.sh@862 -- # return 0 00:17:11.591 13:01:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:11.591 [2024-12-13 13:01:52.277702] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.591 [2024-12-13 13:01:52.282559] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:11.591 [2024-12-13 13:01:52.283205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169b7c0 (107): Transport endpoint is not connected 00:17:11.591 [2024-12-13 13:01:52.284192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169b7c0 (9): Bad file descriptor 00:17:11.591 [2024-12-13 13:01:52.285188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:11.591 [2024-12-13 13:01:52.285220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:11.591 [2024-12-13 13:01:52.285229] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:11.591 2024/12/13 13:01:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:11.591 request: 00:17:11.591 { 00:17:11.591 "method": "bdev_nvme_attach_controller", 00:17:11.591 "params": { 00:17:11.591 "name": "TLSTEST", 00:17:11.591 "trtype": "tcp", 00:17:11.591 "traddr": "10.0.0.2", 00:17:11.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.591 "adrfam": "ipv4", 00:17:11.591 "trsvcid": "4420", 00:17:11.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.591 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:11.591 } 00:17:11.591 } 00:17:11.591 Got JSON-RPC error response 00:17:11.591 GoRPCClient: error on JSON-RPC call 00:17:11.591 13:01:52 -- target/tls.sh@36 -- # killprocess 88484 00:17:11.591 13:01:52 -- common/autotest_common.sh@936 -- # '[' -z 88484 ']' 00:17:11.591 13:01:52 -- common/autotest_common.sh@940 -- # kill -0 88484 00:17:11.591 13:01:52 -- common/autotest_common.sh@941 -- # uname 00:17:11.591 13:01:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.591 13:01:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88484 00:17:11.591 killing process with pid 88484 00:17:11.591 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.591 00:17:11.591 Latency(us) 00:17:11.591 [2024-12-13T13:01:52.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.591 [2024-12-13T13:01:52.367Z] =================================================================================================================== 00:17:11.591 [2024-12-13T13:01:52.367Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.591 13:01:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:11.591 13:01:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:11.591 13:01:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88484' 00:17:11.591 13:01:52 -- common/autotest_common.sh@955 -- # kill 88484 00:17:11.591 13:01:52 -- common/autotest_common.sh@960 -- # wait 88484 00:17:11.850 13:01:52 -- target/tls.sh@37 -- # return 1 00:17:11.850 13:01:52 -- common/autotest_common.sh@653 -- # es=1 00:17:11.850 13:01:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:11.850 13:01:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:11.850 13:01:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:11.850 13:01:52 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:11.850 13:01:52 -- common/autotest_common.sh@650 -- # local es=0 00:17:11.850 13:01:52 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:11.850 13:01:52 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:11.850 13:01:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.850 13:01:52 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:11.850 13:01:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.850 13:01:52 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:11.850 13:01:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.850 13:01:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.850 13:01:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:11.850 13:01:52 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:11.850 13:01:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.850 13:01:52 -- target/tls.sh@28 -- # bdevperf_pid=88531 00:17:11.850 13:01:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.850 13:01:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.850 13:01:52 -- target/tls.sh@31 -- # waitforlisten 88531 /var/tmp/bdevperf.sock 00:17:11.850 13:01:52 -- common/autotest_common.sh@829 -- # '[' -z 88531 ']' 00:17:11.850 13:01:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.850 13:01:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.850 13:01:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.850 13:01:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.850 13:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:11.850 [2024-12-13 13:01:52.565034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:11.850 [2024-12-13 13:01:52.565275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88531 ] 00:17:12.109 [2024-12-13 13:01:52.694096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.109 [2024-12-13 13:01:52.756971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.044 13:01:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.044 13:01:53 -- common/autotest_common.sh@862 -- # return 0 00:17:13.044 13:01:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:13.044 [2024-12-13 13:01:53.782248] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.044 [2024-12-13 13:01:53.790280] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:13.044 [2024-12-13 13:01:53.790333] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:13.044 [2024-12-13 13:01:53.790399] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:13.044 [2024-12-13 13:01:53.790644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xf7f7c0 (107): Transport endpoint is not connected 00:17:13.044 [2024-12-13 13:01:53.791634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7f7c0 (9): Bad file descriptor 00:17:13.044 [2024-12-13 13:01:53.792631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:13.044 [2024-12-13 13:01:53.792664] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:13.044 [2024-12-13 13:01:53.792688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:13.044 2024/12/13 13:01:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:13.044 request: 00:17:13.044 { 00:17:13.044 "method": "bdev_nvme_attach_controller", 00:17:13.044 "params": { 00:17:13.044 "name": "TLSTEST", 00:17:13.044 "trtype": "tcp", 00:17:13.044 "traddr": "10.0.0.2", 00:17:13.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:13.044 "adrfam": "ipv4", 00:17:13.044 "trsvcid": "4420", 00:17:13.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.044 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:13.044 } 00:17:13.044 } 00:17:13.044 Got JSON-RPC error response 00:17:13.044 GoRPCClient: error on JSON-RPC call 00:17:13.044 13:01:53 -- target/tls.sh@36 -- # killprocess 88531 00:17:13.044 13:01:53 -- common/autotest_common.sh@936 -- # '[' -z 88531 ']' 00:17:13.044 13:01:53 -- common/autotest_common.sh@940 -- # kill -0 88531 00:17:13.044 13:01:53 -- common/autotest_common.sh@941 -- # uname 00:17:13.044 13:01:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.044 13:01:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88531 00:17:13.303 killing process with pid 88531 00:17:13.303 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.303 00:17:13.303 Latency(us) 00:17:13.303 [2024-12-13T13:01:54.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.303 [2024-12-13T13:01:54.079Z] =================================================================================================================== 00:17:13.303 [2024-12-13T13:01:54.079Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.303 13:01:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:13.303 13:01:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:13.303 13:01:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88531' 00:17:13.303 13:01:53 -- common/autotest_common.sh@955 -- # kill 88531 00:17:13.303 13:01:53 -- common/autotest_common.sh@960 -- # wait 88531 00:17:13.303 13:01:54 -- target/tls.sh@37 -- # return 1 00:17:13.303 13:01:54 -- common/autotest_common.sh@653 -- # es=1 00:17:13.303 13:01:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.303 13:01:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.303 13:01:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.303 13:01:54 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:13.303 13:01:54 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:13.303 13:01:54 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:13.303 13:01:54 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:13.303 13:01:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.303 13:01:54 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:13.303 13:01:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.303 13:01:54 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:13.303 13:01:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.303 13:01:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:13.303 13:01:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.303 13:01:54 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:13.303 13:01:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.303 13:01:54 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.303 13:01:54 -- target/tls.sh@28 -- # bdevperf_pid=88577 00:17:13.303 13:01:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.303 13:01:54 -- target/tls.sh@31 -- # waitforlisten 88577 /var/tmp/bdevperf.sock 00:17:13.303 13:01:54 -- common/autotest_common.sh@829 -- # '[' -z 88577 ']' 00:17:13.303 13:01:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.303 13:01:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.303 13:01:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.303 13:01:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.303 13:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:13.303 [2024-12-13 13:01:54.064968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:13.303 [2024-12-13 13:01:54.065202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88577 ] 00:17:13.562 [2024-12-13 13:01:54.197081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.562 [2024-12-13 13:01:54.261260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.503 13:01:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.503 13:01:54 -- common/autotest_common.sh@862 -- # return 0 00:17:14.503 13:01:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:14.504 [2024-12-13 13:01:55.153623] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.504 [2024-12-13 13:01:55.160525] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:14.504 [2024-12-13 13:01:55.160563] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:14.504 [2024-12-13 13:01:55.160627] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:14.504 [2024-12-13 13:01:55.161280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e07c0 (107): Transport endpoint is not connected 00:17:14.504 [2024-12-13 13:01:55.162272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e07c0 (9): Bad file descriptor 00:17:14.504 [2024-12-13 13:01:55.163270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:14.504 [2024-12-13 13:01:55.163511] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:14.504 [2024-12-13 13:01:55.163542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:14.504 2024/12/13 13:01:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:14.504 request: 00:17:14.504 { 00:17:14.504 "method": "bdev_nvme_attach_controller", 00:17:14.504 "params": { 00:17:14.504 "name": "TLSTEST", 00:17:14.504 "trtype": "tcp", 00:17:14.504 "traddr": "10.0.0.2", 00:17:14.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.504 "adrfam": "ipv4", 00:17:14.504 "trsvcid": "4420", 00:17:14.504 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:14.504 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:14.504 } 00:17:14.504 } 00:17:14.504 Got JSON-RPC error response 00:17:14.504 GoRPCClient: error on JSON-RPC call 00:17:14.504 13:01:55 -- target/tls.sh@36 -- # killprocess 88577 00:17:14.504 13:01:55 -- common/autotest_common.sh@936 -- # '[' -z 88577 ']' 00:17:14.504 13:01:55 -- common/autotest_common.sh@940 -- # kill -0 88577 00:17:14.504 13:01:55 -- common/autotest_common.sh@941 -- # uname 00:17:14.504 13:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.504 13:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88577 00:17:14.504 13:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:14.504 13:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:14.504 13:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88577' 00:17:14.504 killing process with pid 88577 00:17:14.504 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.504 00:17:14.504 Latency(us) 00:17:14.504 [2024-12-13T13:01:55.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.504 [2024-12-13T13:01:55.280Z] =================================================================================================================== 00:17:14.504 [2024-12-13T13:01:55.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.504 13:01:55 -- common/autotest_common.sh@955 -- # kill 88577 00:17:14.504 13:01:55 -- common/autotest_common.sh@960 -- # wait 88577 00:17:14.764 13:01:55 -- target/tls.sh@37 -- # return 1 00:17:14.764 13:01:55 -- common/autotest_common.sh@653 -- # es=1 00:17:14.764 13:01:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.764 13:01:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.764 13:01:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.764 13:01:55 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:14.764 13:01:55 -- common/autotest_common.sh@650 -- # local es=0 00:17:14.764 13:01:55 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:14.764 13:01:55 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:14.764 13:01:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.764 13:01:55 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:14.764 13:01:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.764 13:01:55 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:14.764 13:01:55 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:14.764 13:01:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:14.764 13:01:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:14.764 13:01:55 -- target/tls.sh@23 -- # psk= 00:17:14.764 13:01:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.764 13:01:55 -- target/tls.sh@28 -- # bdevperf_pid=88618 00:17:14.764 13:01:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:14.764 13:01:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:14.764 13:01:55 -- target/tls.sh@31 -- # waitforlisten 88618 /var/tmp/bdevperf.sock 00:17:14.764 13:01:55 -- common/autotest_common.sh@829 -- # '[' -z 88618 ']' 00:17:14.764 13:01:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.764 13:01:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.764 13:01:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.764 13:01:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.764 13:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:14.764 [2024-12-13 13:01:55.452218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:14.764 [2024-12-13 13:01:55.452324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88618 ] 00:17:15.023 [2024-12-13 13:01:55.585393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.023 [2024-12-13 13:01:55.646062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.958 13:01:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.958 13:01:56 -- common/autotest_common.sh@862 -- # return 0 00:17:15.958 13:01:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:15.958 [2024-12-13 13:01:56.630666] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:15.958 [2024-12-13 13:01:56.632476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22df090 (9): Bad file descriptor 00:17:15.958 [2024-12-13 13:01:56.633471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:15.958 [2024-12-13 13:01:56.633505] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:15.958 [2024-12-13 13:01:56.633526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:15.958 2024/12/13 13:01:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:15.958 request: 00:17:15.958 { 00:17:15.958 "method": "bdev_nvme_attach_controller", 00:17:15.958 "params": { 00:17:15.958 "name": "TLSTEST", 00:17:15.958 "trtype": "tcp", 00:17:15.958 "traddr": "10.0.0.2", 00:17:15.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.958 "adrfam": "ipv4", 00:17:15.958 "trsvcid": "4420", 00:17:15.958 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:15.958 } 00:17:15.958 } 00:17:15.958 Got JSON-RPC error response 00:17:15.958 GoRPCClient: error on JSON-RPC call 00:17:15.958 13:01:56 -- target/tls.sh@36 -- # killprocess 88618 00:17:15.958 13:01:56 -- common/autotest_common.sh@936 -- # '[' -z 88618 ']' 00:17:15.958 13:01:56 -- common/autotest_common.sh@940 -- # kill -0 88618 00:17:15.958 13:01:56 -- common/autotest_common.sh@941 -- # uname 00:17:15.958 13:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.958 13:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88618 00:17:15.958 13:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:15.958 13:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:15.958 killing process with pid 88618 00:17:15.958 13:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88618' 00:17:15.958 13:01:56 -- common/autotest_common.sh@955 -- # kill 88618 00:17:15.958 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.958 00:17:15.958 Latency(us) 00:17:15.958 [2024-12-13T13:01:56.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.958 [2024-12-13T13:01:56.734Z] =================================================================================================================== 00:17:15.958 [2024-12-13T13:01:56.734Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:15.958 13:01:56 -- common/autotest_common.sh@960 -- # wait 88618 00:17:16.217 13:01:56 -- target/tls.sh@37 -- # return 1 00:17:16.217 13:01:56 -- common/autotest_common.sh@653 -- # es=1 00:17:16.217 13:01:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.217 13:01:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.217 13:01:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.217 13:01:56 -- target/tls.sh@167 -- # killprocess 87978 00:17:16.217 13:01:56 -- common/autotest_common.sh@936 -- # '[' -z 87978 ']' 00:17:16.217 13:01:56 -- common/autotest_common.sh@940 -- # kill -0 87978 00:17:16.217 13:01:56 -- common/autotest_common.sh@941 -- # uname 00:17:16.217 13:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.217 13:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87978 00:17:16.217 13:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:16.217 13:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:16.217 killing process with pid 87978 00:17:16.217 13:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87978' 00:17:16.217 13:01:56 -- common/autotest_common.sh@955 -- # kill 87978 00:17:16.217 13:01:56 -- common/autotest_common.sh@960 -- # wait 87978 00:17:16.476 13:01:57 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:16.476 13:01:57 -- target/tls.sh@49 -- # local key hash crc 00:17:16.476 13:01:57 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:16.476 13:01:57 -- target/tls.sh@51 -- # hash=02 00:17:16.476 13:01:57 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:16.476 13:01:57 -- target/tls.sh@52 -- # gzip -1 -c 00:17:16.476 13:01:57 -- target/tls.sh@52 -- # tail -c8 00:17:16.476 13:01:57 -- target/tls.sh@52 -- # head -c 4 00:17:16.476 13:01:57 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:16.476 13:01:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:16.476 13:01:57 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:16.476 13:01:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:16.476 13:01:57 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:16.476 13:01:57 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.476 13:01:57 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:16.476 13:01:57 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.476 13:01:57 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:16.476 13:01:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:16.476 13:01:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.476 13:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.476 13:01:57 -- nvmf/common.sh@469 -- # nvmfpid=88684 00:17:16.476 13:01:57 -- nvmf/common.sh@470 -- # waitforlisten 88684 00:17:16.476 13:01:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:16.476 13:01:57 -- common/autotest_common.sh@829 -- # '[' -z 88684 ']' 00:17:16.476 13:01:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.476 13:01:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.476 13:01:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.476 13:01:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.476 13:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.476 [2024-12-13 13:01:57.165651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:16.476 [2024-12-13 13:01:57.165736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.735 [2024-12-13 13:01:57.298641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.735 [2024-12-13 13:01:57.354612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:16.735 [2024-12-13 13:01:57.354811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
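The format_interchange_psk pipeline traced above derives the TLS interchange key by taking the CRC-32 out of the gzip trailer (gzip -1 -c | tail -c8 | head -c 4), appending those four bytes to the configured key, and base64-encoding the result. A minimal Python sketch of the same derivation, assuming the standard zlib CRC-32 that the gzip trailer carries:

```python
import base64
import struct
import zlib

def format_interchange_psk(configured_key: str, hash_id: str = "02") -> str:
    """Rebuild the interchange PSK the way the shell pipeline above does:
    append the little-endian CRC-32 of the configured key (the first four
    bytes of the gzip trailer) and base64-encode key + CRC."""
    data = configured_key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    return f"NVMeTLSkey-1:{hash_id}:{base64.b64encode(data + crc).decode()}:"

# Should reproduce the key_long value recorded in the trace above.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677"))
```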
00:17:16.735 [2024-12-13 13:01:57.354825] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.735 [2024-12-13 13:01:57.354834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.735 [2024-12-13 13:01:57.354858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.702 13:01:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.702 13:01:58 -- common/autotest_common.sh@862 -- # return 0 00:17:17.702 13:01:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:17.702 13:01:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:17.702 13:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:17.702 13:01:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.702 13:01:58 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.702 13:01:58 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.702 13:01:58 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:17.960 [2024-12-13 13:01:58.485190] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.960 13:01:58 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:18.219 13:01:58 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:18.219 [2024-12-13 13:01:58.965230] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:18.219 [2024-12-13 13:01:58.965491] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.219 13:01:58 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:18.478 malloc0 00:17:18.478 13:01:59 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:18.736 13:01:59 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:18.995 13:01:59 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:18.995 13:01:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.995 13:01:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.995 13:01:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.995 13:01:59 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:18.995 13:01:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.995 13:01:59 -- target/tls.sh@28 -- # bdevperf_pid=88781 00:17:18.995 13:01:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.995 13:01:59 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.995 13:01:59 -- target/tls.sh@31 -- # waitforlisten 88781 /var/tmp/bdevperf.sock 00:17:18.995 13:01:59 -- 
common/autotest_common.sh@829 -- # '[' -z 88781 ']' 00:17:18.995 13:01:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.995 13:01:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.995 13:01:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.995 13:01:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.995 13:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:18.995 [2024-12-13 13:01:59.667299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:18.995 [2024-12-13 13:01:59.667393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88781 ] 00:17:19.254 [2024-12-13 13:01:59.802665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.254 [2024-12-13 13:01:59.873092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.190 13:02:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.190 13:02:00 -- common/autotest_common.sh@862 -- # return 0 00:17:20.190 13:02:00 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.190 [2024-12-13 13:02:00.790734] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.190 TLSTESTn1 00:17:20.190 13:02:00 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:20.448 Running I/O for 10 seconds... 
00:17:30.424 00:17:30.424 Latency(us) 00:17:30.424 [2024-12-13T13:02:11.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.424 [2024-12-13T13:02:11.200Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:30.424 Verification LBA range: start 0x0 length 0x2000 00:17:30.424 TLSTESTn1 : 10.02 6138.91 23.98 0.00 0.00 20816.07 4796.04 17515.99 00:17:30.424 [2024-12-13T13:02:11.200Z] =================================================================================================================== 00:17:30.424 [2024-12-13T13:02:11.200Z] Total : 6138.91 23.98 0.00 0.00 20816.07 4796.04 17515.99 00:17:30.424 0 00:17:30.424 13:02:11 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.424 13:02:11 -- target/tls.sh@45 -- # killprocess 88781 00:17:30.424 13:02:11 -- common/autotest_common.sh@936 -- # '[' -z 88781 ']' 00:17:30.424 13:02:11 -- common/autotest_common.sh@940 -- # kill -0 88781 00:17:30.424 13:02:11 -- common/autotest_common.sh@941 -- # uname 00:17:30.424 13:02:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.424 13:02:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88781 00:17:30.424 13:02:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:30.424 13:02:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:30.424 killing process with pid 88781 00:17:30.424 13:02:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88781' 00:17:30.424 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.424 00:17:30.424 Latency(us) 00:17:30.424 [2024-12-13T13:02:11.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.424 [2024-12-13T13:02:11.200Z] =================================================================================================================== 00:17:30.424 [2024-12-13T13:02:11.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.424 13:02:11 -- common/autotest_common.sh@955 -- # kill 88781 00:17:30.424 13:02:11 -- common/autotest_common.sh@960 -- # wait 88781 00:17:30.684 13:02:11 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:30.684 13:02:11 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:30.684 13:02:11 -- common/autotest_common.sh@650 -- # local es=0 00:17:30.684 13:02:11 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:30.684 13:02:11 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:30.684 13:02:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.684 13:02:11 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:30.684 13:02:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.684 13:02:11 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:30.684 13:02:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.684 13:02:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:30.684 13:02:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:30.684 13:02:11 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:30.684 13:02:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.684 13:02:11 -- target/tls.sh@28 -- # bdevperf_pid=88928 00:17:30.684 13:02:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.684 13:02:11 -- target/tls.sh@31 -- # waitforlisten 88928 /var/tmp/bdevperf.sock 00:17:30.684 13:02:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.684 13:02:11 -- common/autotest_common.sh@829 -- # '[' -z 88928 ']' 00:17:30.684 13:02:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.684 13:02:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.684 13:02:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.684 13:02:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.684 13:02:11 -- common/autotest_common.sh@10 -- # set +x 00:17:30.684 [2024-12-13 13:02:11.329298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:30.684 [2024-12-13 13:02:11.329388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88928 ] 00:17:30.943 [2024-12-13 13:02:11.467714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.943 [2024-12-13 13:02:11.532354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.879 13:02:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.879 13:02:12 -- common/autotest_common.sh@862 -- # return 0 00:17:31.879 13:02:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.879 [2024-12-13 13:02:12.536209] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.879 [2024-12-13 13:02:12.536421] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:31.879 2024/12/13 13:02:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.879 request: 00:17:31.879 { 00:17:31.879 "method": "bdev_nvme_attach_controller", 00:17:31.879 "params": { 00:17:31.879 "name": "TLSTEST", 00:17:31.879 "trtype": "tcp", 00:17:31.879 "traddr": "10.0.0.2", 00:17:31.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.879 "adrfam": "ipv4", 00:17:31.879 "trsvcid": "4420", 00:17:31.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.879 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:31.879 } 00:17:31.879 } 00:17:31.879 Got 
JSON-RPC error response 00:17:31.879 GoRPCClient: error on JSON-RPC call 00:17:31.879 13:02:12 -- target/tls.sh@36 -- # killprocess 88928 00:17:31.879 13:02:12 -- common/autotest_common.sh@936 -- # '[' -z 88928 ']' 00:17:31.879 13:02:12 -- common/autotest_common.sh@940 -- # kill -0 88928 00:17:31.879 13:02:12 -- common/autotest_common.sh@941 -- # uname 00:17:31.879 13:02:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.879 13:02:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88928 00:17:31.879 killing process with pid 88928 00:17:31.879 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.879 00:17:31.879 Latency(us) 00:17:31.879 [2024-12-13T13:02:12.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.879 [2024-12-13T13:02:12.655Z] =================================================================================================================== 00:17:31.879 [2024-12-13T13:02:12.655Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.879 13:02:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:31.879 13:02:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:31.879 13:02:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88928' 00:17:31.879 13:02:12 -- common/autotest_common.sh@955 -- # kill 88928 00:17:31.879 13:02:12 -- common/autotest_common.sh@960 -- # wait 88928 00:17:32.138 13:02:12 -- target/tls.sh@37 -- # return 1 00:17:32.138 13:02:12 -- common/autotest_common.sh@653 -- # es=1 00:17:32.138 13:02:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.138 13:02:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.138 13:02:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.138 13:02:12 -- target/tls.sh@183 -- # killprocess 88684 00:17:32.138 13:02:12 -- common/autotest_common.sh@936 -- # '[' -z 88684 ']' 00:17:32.138 13:02:12 -- common/autotest_common.sh@940 -- # kill -0 88684 00:17:32.138 13:02:12 -- common/autotest_common.sh@941 -- # uname 00:17:32.138 13:02:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.138 13:02:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88684 00:17:32.138 killing process with pid 88684 00:17:32.138 13:02:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:32.138 13:02:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:32.138 13:02:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88684' 00:17:32.138 13:02:12 -- common/autotest_common.sh@955 -- # kill 88684 00:17:32.138 13:02:12 -- common/autotest_common.sh@960 -- # wait 88684 00:17:32.397 13:02:13 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:32.397 13:02:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:32.397 13:02:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:32.397 13:02:13 -- common/autotest_common.sh@10 -- # set +x 00:17:32.397 13:02:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.397 13:02:13 -- nvmf/common.sh@469 -- # nvmfpid=88983 00:17:32.397 13:02:13 -- nvmf/common.sh@470 -- # waitforlisten 88983 00:17:32.397 13:02:13 -- common/autotest_common.sh@829 -- # '[' -z 88983 ']' 00:17:32.397 13:02:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.397 13:02:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.397 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.397 13:02:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.397 13:02:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.397 13:02:13 -- common/autotest_common.sh@10 -- # set +x 00:17:32.397 [2024-12-13 13:02:13.053944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:32.397 [2024-12-13 13:02:13.054055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.656 [2024-12-13 13:02:13.180080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.656 [2024-12-13 13:02:13.241611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:32.656 [2024-12-13 13:02:13.241809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.656 [2024-12-13 13:02:13.241822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.656 [2024-12-13 13:02:13.241830] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.656 [2024-12-13 13:02:13.241855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.594 13:02:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.594 13:02:14 -- common/autotest_common.sh@862 -- # return 0 00:17:33.594 13:02:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:33.594 13:02:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:33.594 13:02:14 -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 13:02:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.594 13:02:14 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.594 13:02:14 -- common/autotest_common.sh@650 -- # local es=0 00:17:33.594 13:02:14 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.594 13:02:14 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:33.594 13:02:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.594 13:02:14 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:33.594 13:02:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.594 13:02:14 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.594 13:02:14 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.594 13:02:14 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.853 [2024-12-13 13:02:14.375588] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.853 13:02:14 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.112 13:02:14 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:34.370 
[2024-12-13 13:02:14.915685] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.370 [2024-12-13 13:02:14.915952] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.370 13:02:14 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:34.629 malloc0 00:17:34.629 13:02:15 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:34.887 13:02:15 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.887 [2024-12-13 13:02:15.639341] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:34.887 [2024-12-13 13:02:15.639438] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:34.887 [2024-12-13 13:02:15.639487] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:34.887 2024/12/13 13:02:15 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:34.887 request: 00:17:34.887 { 00:17:34.887 "method": "nvmf_subsystem_add_host", 00:17:34.887 "params": { 00:17:34.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.887 "host": "nqn.2016-06.io.spdk:host1", 00:17:34.887 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:34.887 } 00:17:34.887 } 00:17:34.887 Got JSON-RPC error response 00:17:34.887 GoRPCClient: error on JSON-RPC call 00:17:34.887 13:02:15 -- common/autotest_common.sh@653 -- # es=1 00:17:34.887 13:02:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.887 13:02:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.887 13:02:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.887 13:02:15 -- target/tls.sh@189 -- # killprocess 88983 00:17:34.887 13:02:15 -- common/autotest_common.sh@936 -- # '[' -z 88983 ']' 00:17:34.887 13:02:15 -- common/autotest_common.sh@940 -- # kill -0 88983 00:17:34.887 13:02:15 -- common/autotest_common.sh@941 -- # uname 00:17:35.145 13:02:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.146 13:02:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88983 00:17:35.146 13:02:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:35.146 13:02:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:35.146 killing process with pid 88983 00:17:35.146 13:02:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88983' 00:17:35.146 13:02:15 -- common/autotest_common.sh@955 -- # kill 88983 00:17:35.146 13:02:15 -- common/autotest_common.sh@960 -- # wait 88983 00:17:35.146 13:02:15 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:35.146 13:02:15 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:35.146 13:02:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:35.146 13:02:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.146 13:02:15 -- common/autotest_common.sh@10 -- # set +x 00:17:35.146 13:02:15 -- nvmf/common.sh@468 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.146 13:02:15 -- nvmf/common.sh@469 -- # nvmfpid=89095 00:17:35.146 13:02:15 -- nvmf/common.sh@470 -- # waitforlisten 89095 00:17:35.146 13:02:15 -- common/autotest_common.sh@829 -- # '[' -z 89095 ']' 00:17:35.146 13:02:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.146 13:02:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.146 13:02:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.146 13:02:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.146 13:02:15 -- common/autotest_common.sh@10 -- # set +x 00:17:35.405 [2024-12-13 13:02:15.937557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:35.405 [2024-12-13 13:02:15.937658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.405 [2024-12-13 13:02:16.061806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.405 [2024-12-13 13:02:16.119014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:35.405 [2024-12-13 13:02:16.119205] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.405 [2024-12-13 13:02:16.119227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.405 [2024-12-13 13:02:16.119235] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
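Both sides of the test enforce owner-only permissions on the PSK file: after the chmod 0666 step, bdev_nvme_attach_controller and nvmf_subsystem_add_host each fail with "Incorrect permissions for PSK file", and the test only proceeds once the file is back at 0600. A small pre-flight check equivalent to that behaviour, sketched from the log messages (the helper name and the exact mode test are assumptions, not SPDK code):

```python
import os
import stat
import sys

def check_psk_file(path: str) -> None:
    """Reject a PSK file that is readable by group or others, mirroring the
    refusals seen above from both the target and the initiator."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        sys.exit(f"{path}: mode {oct(mode)} is too permissive; expected 0600")

check_psk_file("/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt")
```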
00:17:35.405 [2024-12-13 13:02:16.119266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.341 13:02:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.341 13:02:16 -- common/autotest_common.sh@862 -- # return 0 00:17:36.341 13:02:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:36.341 13:02:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.341 13:02:16 -- common/autotest_common.sh@10 -- # set +x 00:17:36.341 13:02:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.341 13:02:16 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.341 13:02:16 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.341 13:02:16 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.599 [2024-12-13 13:02:17.168656] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.599 13:02:17 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:36.858 13:02:17 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:37.117 [2024-12-13 13:02:17.700858] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.117 [2024-12-13 13:02:17.701162] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.117 13:02:17 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.375 malloc0 00:17:37.375 13:02:17 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:37.376 13:02:18 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:37.634 13:02:18 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.634 13:02:18 -- target/tls.sh@197 -- # bdevperf_pid=89192 00:17:37.634 13:02:18 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.634 13:02:18 -- target/tls.sh@200 -- # waitforlisten 89192 /var/tmp/bdevperf.sock 00:17:37.634 13:02:18 -- common/autotest_common.sh@829 -- # '[' -z 89192 ']' 00:17:37.634 13:02:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.634 13:02:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.634 13:02:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.635 13:02:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.635 13:02:18 -- common/autotest_common.sh@10 -- # set +x 00:17:37.635 [2024-12-13 13:02:18.374856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:37.635 [2024-12-13 13:02:18.374942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89192 ] 00:17:37.893 [2024-12-13 13:02:18.508138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.893 [2024-12-13 13:02:18.572375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.829 13:02:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.829 13:02:19 -- common/autotest_common.sh@862 -- # return 0 00:17:38.829 13:02:19 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.829 [2024-12-13 13:02:19.543234] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.088 TLSTESTn1 00:17:39.088 13:02:19 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:39.347 13:02:19 -- target/tls.sh@205 -- # tgtconf='{ 00:17:39.347 "subsystems": [ 00:17:39.347 { 00:17:39.347 "subsystem": "iobuf", 00:17:39.347 "config": [ 00:17:39.347 { 00:17:39.347 "method": "iobuf_set_options", 00:17:39.347 "params": { 00:17:39.347 "large_bufsize": 135168, 00:17:39.347 "large_pool_count": 1024, 00:17:39.347 "small_bufsize": 8192, 00:17:39.347 "small_pool_count": 8192 00:17:39.347 } 00:17:39.347 } 00:17:39.347 ] 00:17:39.347 }, 00:17:39.347 { 00:17:39.347 "subsystem": "sock", 00:17:39.347 "config": [ 00:17:39.347 { 00:17:39.347 "method": "sock_impl_set_options", 00:17:39.347 "params": { 00:17:39.347 "enable_ktls": false, 00:17:39.347 "enable_placement_id": 0, 00:17:39.347 "enable_quickack": false, 00:17:39.347 "enable_recv_pipe": true, 00:17:39.347 "enable_zerocopy_send_client": false, 00:17:39.347 "enable_zerocopy_send_server": true, 00:17:39.347 "impl_name": "posix", 00:17:39.347 "recv_buf_size": 2097152, 00:17:39.347 "send_buf_size": 2097152, 00:17:39.347 "tls_version": 0, 00:17:39.347 "zerocopy_threshold": 0 00:17:39.347 } 00:17:39.347 }, 00:17:39.347 { 00:17:39.347 "method": "sock_impl_set_options", 00:17:39.347 "params": { 00:17:39.347 "enable_ktls": false, 00:17:39.347 "enable_placement_id": 0, 00:17:39.347 "enable_quickack": false, 00:17:39.347 "enable_recv_pipe": true, 00:17:39.347 "enable_zerocopy_send_client": false, 00:17:39.347 "enable_zerocopy_send_server": true, 00:17:39.347 "impl_name": "ssl", 00:17:39.347 "recv_buf_size": 4096, 00:17:39.347 "send_buf_size": 4096, 00:17:39.347 "tls_version": 0, 00:17:39.347 "zerocopy_threshold": 0 00:17:39.347 } 00:17:39.347 } 00:17:39.347 ] 00:17:39.347 }, 00:17:39.347 { 00:17:39.347 "subsystem": "vmd", 00:17:39.347 "config": [] 00:17:39.347 }, 00:17:39.347 { 00:17:39.347 "subsystem": "accel", 00:17:39.347 "config": [ 00:17:39.348 { 00:17:39.348 "method": "accel_set_options", 00:17:39.348 "params": { 00:17:39.348 "buf_count": 2048, 00:17:39.348 "large_cache_size": 16, 00:17:39.348 "sequence_count": 2048, 00:17:39.348 "small_cache_size": 128, 00:17:39.348 "task_count": 2048 00:17:39.348 } 00:17:39.348 } 00:17:39.348 ] 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "subsystem": "bdev", 00:17:39.348 "config": [ 00:17:39.348 { 00:17:39.348 "method": "bdev_set_options", 00:17:39.348 "params": { 00:17:39.348 
"bdev_auto_examine": true, 00:17:39.348 "bdev_io_cache_size": 256, 00:17:39.348 "bdev_io_pool_size": 65535, 00:17:39.348 "iobuf_large_cache_size": 16, 00:17:39.348 "iobuf_small_cache_size": 128 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "bdev_raid_set_options", 00:17:39.348 "params": { 00:17:39.348 "process_window_size_kb": 1024 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "bdev_iscsi_set_options", 00:17:39.348 "params": { 00:17:39.348 "timeout_sec": 30 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "bdev_nvme_set_options", 00:17:39.348 "params": { 00:17:39.348 "action_on_timeout": "none", 00:17:39.348 "allow_accel_sequence": false, 00:17:39.348 "arbitration_burst": 0, 00:17:39.348 "bdev_retry_count": 3, 00:17:39.348 "ctrlr_loss_timeout_sec": 0, 00:17:39.348 "delay_cmd_submit": true, 00:17:39.348 "fast_io_fail_timeout_sec": 0, 00:17:39.348 "generate_uuids": false, 00:17:39.348 "high_priority_weight": 0, 00:17:39.348 "io_path_stat": false, 00:17:39.348 "io_queue_requests": 0, 00:17:39.348 "keep_alive_timeout_ms": 10000, 00:17:39.348 "low_priority_weight": 0, 00:17:39.348 "medium_priority_weight": 0, 00:17:39.348 "nvme_adminq_poll_period_us": 10000, 00:17:39.348 "nvme_ioq_poll_period_us": 0, 00:17:39.348 "reconnect_delay_sec": 0, 00:17:39.348 "timeout_admin_us": 0, 00:17:39.348 "timeout_us": 0, 00:17:39.348 "transport_ack_timeout": 0, 00:17:39.348 "transport_retry_count": 4, 00:17:39.348 "transport_tos": 0 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "bdev_nvme_set_hotplug", 00:17:39.348 "params": { 00:17:39.348 "enable": false, 00:17:39.348 "period_us": 100000 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "bdev_malloc_create", 00:17:39.348 "params": { 00:17:39.348 "block_size": 4096, 00:17:39.348 "name": "malloc0", 00:17:39.348 "num_blocks": 8192, 00:17:39.348 "optimal_io_boundary": 0, 00:17:39.348 "physical_block_size": 4096, 00:17:39.348 "uuid": "d54e9b98-8f29-4930-adb1-73cdbc5f2b30" 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "bdev_wait_for_examine" 00:17:39.348 } 00:17:39.348 ] 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "subsystem": "nbd", 00:17:39.348 "config": [] 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "subsystem": "scheduler", 00:17:39.348 "config": [ 00:17:39.348 { 00:17:39.348 "method": "framework_set_scheduler", 00:17:39.348 "params": { 00:17:39.348 "name": "static" 00:17:39.348 } 00:17:39.348 } 00:17:39.348 ] 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "subsystem": "nvmf", 00:17:39.348 "config": [ 00:17:39.348 { 00:17:39.348 "method": "nvmf_set_config", 00:17:39.348 "params": { 00:17:39.348 "admin_cmd_passthru": { 00:17:39.348 "identify_ctrlr": false 00:17:39.348 }, 00:17:39.348 "discovery_filter": "match_any" 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "nvmf_set_max_subsystems", 00:17:39.348 "params": { 00:17:39.348 "max_subsystems": 1024 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "nvmf_set_crdt", 00:17:39.348 "params": { 00:17:39.348 "crdt1": 0, 00:17:39.348 "crdt2": 0, 00:17:39.348 "crdt3": 0 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "nvmf_create_transport", 00:17:39.348 "params": { 00:17:39.348 "abort_timeout_sec": 1, 00:17:39.348 "buf_cache_size": 4294967295, 00:17:39.348 "c2h_success": false, 00:17:39.348 "dif_insert_or_strip": false, 00:17:39.348 "in_capsule_data_size": 4096, 00:17:39.348 "io_unit_size": 131072, 00:17:39.348 "max_aq_depth": 128, 
00:17:39.348 "max_io_qpairs_per_ctrlr": 127, 00:17:39.348 "max_io_size": 131072, 00:17:39.348 "max_queue_depth": 128, 00:17:39.348 "num_shared_buffers": 511, 00:17:39.348 "sock_priority": 0, 00:17:39.348 "trtype": "TCP", 00:17:39.348 "zcopy": false 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "nvmf_create_subsystem", 00:17:39.348 "params": { 00:17:39.348 "allow_any_host": false, 00:17:39.348 "ana_reporting": false, 00:17:39.348 "max_cntlid": 65519, 00:17:39.348 "max_namespaces": 10, 00:17:39.348 "min_cntlid": 1, 00:17:39.348 "model_number": "SPDK bdev Controller", 00:17:39.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.348 "serial_number": "SPDK00000000000001" 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "nvmf_subsystem_add_host", 00:17:39.348 "params": { 00:17:39.348 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.348 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "nvmf_subsystem_add_ns", 00:17:39.348 "params": { 00:17:39.348 "namespace": { 00:17:39.348 "bdev_name": "malloc0", 00:17:39.348 "nguid": "D54E9B988F294930ADB173CDBC5F2B30", 00:17:39.348 "nsid": 1, 00:17:39.348 "uuid": "d54e9b98-8f29-4930-adb1-73cdbc5f2b30" 00:17:39.348 }, 00:17:39.348 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:39.348 } 00:17:39.348 }, 00:17:39.348 { 00:17:39.348 "method": "nvmf_subsystem_add_listener", 00:17:39.348 "params": { 00:17:39.348 "listen_address": { 00:17:39.348 "adrfam": "IPv4", 00:17:39.348 "traddr": "10.0.0.2", 00:17:39.348 "trsvcid": "4420", 00:17:39.348 "trtype": "TCP" 00:17:39.348 }, 00:17:39.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.348 "secure_channel": true 00:17:39.348 } 00:17:39.348 } 00:17:39.348 ] 00:17:39.348 } 00:17:39.348 ] 00:17:39.348 }' 00:17:39.348 13:02:19 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:39.607 13:02:20 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:39.607 "subsystems": [ 00:17:39.607 { 00:17:39.607 "subsystem": "iobuf", 00:17:39.607 "config": [ 00:17:39.607 { 00:17:39.607 "method": "iobuf_set_options", 00:17:39.607 "params": { 00:17:39.607 "large_bufsize": 135168, 00:17:39.607 "large_pool_count": 1024, 00:17:39.607 "small_bufsize": 8192, 00:17:39.607 "small_pool_count": 8192 00:17:39.607 } 00:17:39.607 } 00:17:39.607 ] 00:17:39.607 }, 00:17:39.607 { 00:17:39.607 "subsystem": "sock", 00:17:39.607 "config": [ 00:17:39.607 { 00:17:39.607 "method": "sock_impl_set_options", 00:17:39.607 "params": { 00:17:39.607 "enable_ktls": false, 00:17:39.607 "enable_placement_id": 0, 00:17:39.607 "enable_quickack": false, 00:17:39.607 "enable_recv_pipe": true, 00:17:39.607 "enable_zerocopy_send_client": false, 00:17:39.607 "enable_zerocopy_send_server": true, 00:17:39.607 "impl_name": "posix", 00:17:39.607 "recv_buf_size": 2097152, 00:17:39.607 "send_buf_size": 2097152, 00:17:39.607 "tls_version": 0, 00:17:39.607 "zerocopy_threshold": 0 00:17:39.607 } 00:17:39.607 }, 00:17:39.607 { 00:17:39.607 "method": "sock_impl_set_options", 00:17:39.607 "params": { 00:17:39.607 "enable_ktls": false, 00:17:39.607 "enable_placement_id": 0, 00:17:39.607 "enable_quickack": false, 00:17:39.607 "enable_recv_pipe": true, 00:17:39.607 "enable_zerocopy_send_client": false, 00:17:39.607 "enable_zerocopy_send_server": true, 00:17:39.607 "impl_name": "ssl", 00:17:39.607 "recv_buf_size": 4096, 00:17:39.607 "send_buf_size": 4096, 00:17:39.607 
"tls_version": 0, 00:17:39.607 "zerocopy_threshold": 0 00:17:39.607 } 00:17:39.607 } 00:17:39.607 ] 00:17:39.607 }, 00:17:39.607 { 00:17:39.607 "subsystem": "vmd", 00:17:39.607 "config": [] 00:17:39.607 }, 00:17:39.607 { 00:17:39.607 "subsystem": "accel", 00:17:39.607 "config": [ 00:17:39.607 { 00:17:39.607 "method": "accel_set_options", 00:17:39.607 "params": { 00:17:39.607 "buf_count": 2048, 00:17:39.607 "large_cache_size": 16, 00:17:39.607 "sequence_count": 2048, 00:17:39.607 "small_cache_size": 128, 00:17:39.607 "task_count": 2048 00:17:39.607 } 00:17:39.607 } 00:17:39.607 ] 00:17:39.607 }, 00:17:39.607 { 00:17:39.607 "subsystem": "bdev", 00:17:39.607 "config": [ 00:17:39.607 { 00:17:39.607 "method": "bdev_set_options", 00:17:39.607 "params": { 00:17:39.608 "bdev_auto_examine": true, 00:17:39.608 "bdev_io_cache_size": 256, 00:17:39.608 "bdev_io_pool_size": 65535, 00:17:39.608 "iobuf_large_cache_size": 16, 00:17:39.608 "iobuf_small_cache_size": 128 00:17:39.608 } 00:17:39.608 }, 00:17:39.608 { 00:17:39.608 "method": "bdev_raid_set_options", 00:17:39.608 "params": { 00:17:39.608 "process_window_size_kb": 1024 00:17:39.608 } 00:17:39.608 }, 00:17:39.608 { 00:17:39.608 "method": "bdev_iscsi_set_options", 00:17:39.608 "params": { 00:17:39.608 "timeout_sec": 30 00:17:39.608 } 00:17:39.608 }, 00:17:39.608 { 00:17:39.608 "method": "bdev_nvme_set_options", 00:17:39.608 "params": { 00:17:39.608 "action_on_timeout": "none", 00:17:39.608 "allow_accel_sequence": false, 00:17:39.608 "arbitration_burst": 0, 00:17:39.608 "bdev_retry_count": 3, 00:17:39.608 "ctrlr_loss_timeout_sec": 0, 00:17:39.608 "delay_cmd_submit": true, 00:17:39.608 "fast_io_fail_timeout_sec": 0, 00:17:39.608 "generate_uuids": false, 00:17:39.608 "high_priority_weight": 0, 00:17:39.608 "io_path_stat": false, 00:17:39.608 "io_queue_requests": 512, 00:17:39.608 "keep_alive_timeout_ms": 10000, 00:17:39.608 "low_priority_weight": 0, 00:17:39.608 "medium_priority_weight": 0, 00:17:39.608 "nvme_adminq_poll_period_us": 10000, 00:17:39.608 "nvme_ioq_poll_period_us": 0, 00:17:39.608 "reconnect_delay_sec": 0, 00:17:39.608 "timeout_admin_us": 0, 00:17:39.608 "timeout_us": 0, 00:17:39.608 "transport_ack_timeout": 0, 00:17:39.608 "transport_retry_count": 4, 00:17:39.608 "transport_tos": 0 00:17:39.608 } 00:17:39.608 }, 00:17:39.608 { 00:17:39.608 "method": "bdev_nvme_attach_controller", 00:17:39.608 "params": { 00:17:39.608 "adrfam": "IPv4", 00:17:39.608 "ctrlr_loss_timeout_sec": 0, 00:17:39.608 "ddgst": false, 00:17:39.608 "fast_io_fail_timeout_sec": 0, 00:17:39.608 "hdgst": false, 00:17:39.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.608 "name": "TLSTEST", 00:17:39.608 "prchk_guard": false, 00:17:39.608 "prchk_reftag": false, 00:17:39.608 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:39.608 "reconnect_delay_sec": 0, 00:17:39.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.608 "traddr": "10.0.0.2", 00:17:39.608 "trsvcid": "4420", 00:17:39.608 "trtype": "TCP" 00:17:39.608 } 00:17:39.608 }, 00:17:39.608 { 00:17:39.608 "method": "bdev_nvme_set_hotplug", 00:17:39.608 "params": { 00:17:39.608 "enable": false, 00:17:39.608 "period_us": 100000 00:17:39.608 } 00:17:39.608 }, 00:17:39.608 { 00:17:39.608 "method": "bdev_wait_for_examine" 00:17:39.608 } 00:17:39.608 ] 00:17:39.608 }, 00:17:39.608 { 00:17:39.608 "subsystem": "nbd", 00:17:39.608 "config": [] 00:17:39.608 } 00:17:39.608 ] 00:17:39.608 }' 00:17:39.608 13:02:20 -- target/tls.sh@208 -- # killprocess 89192 00:17:39.608 13:02:20 -- 
common/autotest_common.sh@936 -- # '[' -z 89192 ']' 00:17:39.608 13:02:20 -- common/autotest_common.sh@940 -- # kill -0 89192 00:17:39.608 13:02:20 -- common/autotest_common.sh@941 -- # uname 00:17:39.608 13:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.608 13:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89192 00:17:39.608 13:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.608 13:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.608 killing process with pid 89192 00:17:39.608 13:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89192' 00:17:39.608 13:02:20 -- common/autotest_common.sh@955 -- # kill 89192 00:17:39.608 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.608 00:17:39.608 Latency(us) 00:17:39.608 [2024-12-13T13:02:20.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.608 [2024-12-13T13:02:20.384Z] =================================================================================================================== 00:17:39.608 [2024-12-13T13:02:20.384Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.608 13:02:20 -- common/autotest_common.sh@960 -- # wait 89192 00:17:39.928 13:02:20 -- target/tls.sh@209 -- # killprocess 89095 00:17:39.928 13:02:20 -- common/autotest_common.sh@936 -- # '[' -z 89095 ']' 00:17:39.928 13:02:20 -- common/autotest_common.sh@940 -- # kill -0 89095 00:17:39.928 13:02:20 -- common/autotest_common.sh@941 -- # uname 00:17:39.928 13:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.928 13:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89095 00:17:39.928 13:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:39.928 13:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:39.928 killing process with pid 89095 00:17:39.928 13:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89095' 00:17:39.928 13:02:20 -- common/autotest_common.sh@955 -- # kill 89095 00:17:39.928 13:02:20 -- common/autotest_common.sh@960 -- # wait 89095 00:17:40.188 13:02:20 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:40.188 13:02:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:40.188 13:02:20 -- target/tls.sh@212 -- # echo '{ 00:17:40.188 "subsystems": [ 00:17:40.188 { 00:17:40.188 "subsystem": "iobuf", 00:17:40.188 "config": [ 00:17:40.188 { 00:17:40.188 "method": "iobuf_set_options", 00:17:40.188 "params": { 00:17:40.188 "large_bufsize": 135168, 00:17:40.188 "large_pool_count": 1024, 00:17:40.188 "small_bufsize": 8192, 00:17:40.188 "small_pool_count": 8192 00:17:40.188 } 00:17:40.188 } 00:17:40.188 ] 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "subsystem": "sock", 00:17:40.188 "config": [ 00:17:40.188 { 00:17:40.188 "method": "sock_impl_set_options", 00:17:40.188 "params": { 00:17:40.188 "enable_ktls": false, 00:17:40.188 "enable_placement_id": 0, 00:17:40.188 "enable_quickack": false, 00:17:40.188 "enable_recv_pipe": true, 00:17:40.188 "enable_zerocopy_send_client": false, 00:17:40.188 "enable_zerocopy_send_server": true, 00:17:40.188 "impl_name": "posix", 00:17:40.188 "recv_buf_size": 2097152, 00:17:40.188 "send_buf_size": 2097152, 00:17:40.188 "tls_version": 0, 00:17:40.188 "zerocopy_threshold": 0 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "sock_impl_set_options", 00:17:40.188 "params": { 00:17:40.188 
"enable_ktls": false, 00:17:40.188 "enable_placement_id": 0, 00:17:40.188 "enable_quickack": false, 00:17:40.188 "enable_recv_pipe": true, 00:17:40.188 "enable_zerocopy_send_client": false, 00:17:40.188 "enable_zerocopy_send_server": true, 00:17:40.188 "impl_name": "ssl", 00:17:40.188 "recv_buf_size": 4096, 00:17:40.188 "send_buf_size": 4096, 00:17:40.188 "tls_version": 0, 00:17:40.188 "zerocopy_threshold": 0 00:17:40.188 } 00:17:40.188 } 00:17:40.188 ] 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "subsystem": "vmd", 00:17:40.188 "config": [] 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "subsystem": "accel", 00:17:40.188 "config": [ 00:17:40.188 { 00:17:40.188 "method": "accel_set_options", 00:17:40.188 "params": { 00:17:40.188 "buf_count": 2048, 00:17:40.188 "large_cache_size": 16, 00:17:40.188 "sequence_count": 2048, 00:17:40.188 "small_cache_size": 128, 00:17:40.188 "task_count": 2048 00:17:40.188 } 00:17:40.188 } 00:17:40.188 ] 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "subsystem": "bdev", 00:17:40.188 "config": [ 00:17:40.188 { 00:17:40.188 "method": "bdev_set_options", 00:17:40.188 "params": { 00:17:40.188 "bdev_auto_examine": true, 00:17:40.188 "bdev_io_cache_size": 256, 00:17:40.188 "bdev_io_pool_size": 65535, 00:17:40.188 "iobuf_large_cache_size": 16, 00:17:40.188 "iobuf_small_cache_size": 128 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "bdev_raid_set_options", 00:17:40.188 "params": { 00:17:40.188 "process_window_size_kb": 1024 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "bdev_iscsi_set_options", 00:17:40.188 "params": { 00:17:40.188 "timeout_sec": 30 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "bdev_nvme_set_options", 00:17:40.188 "params": { 00:17:40.188 "action_on_timeout": "none", 00:17:40.188 "allow_accel_sequence": false, 00:17:40.188 "arbitration_burst": 0, 00:17:40.188 "bdev_retry_count": 3, 00:17:40.188 "ctrlr_loss_timeout_sec": 0, 00:17:40.188 "delay_cmd_submit": true, 00:17:40.188 "fast_io_fail_timeout_sec": 0, 00:17:40.188 "generate_uuids": false, 00:17:40.188 "high_priority_weight": 0, 00:17:40.188 "io_path_stat": false, 00:17:40.188 "io_queue_requests": 0, 00:17:40.188 "keep_alive_timeout_ms": 10000, 00:17:40.188 "low_priority_weight": 0, 00:17:40.188 "medium_priority_weight": 0, 00:17:40.188 "nvme_adminq_poll_period_us": 10000, 00:17:40.188 "nvme_ioq_poll_period_us": 0, 00:17:40.188 "reconnect_delay_sec": 0, 00:17:40.188 "timeout_admin_us": 0, 00:17:40.188 "timeout_us": 0, 00:17:40.188 "transport_ack_timeout": 0, 00:17:40.188 "transport_retry_count": 4, 00:17:40.188 "transport_tos": 0 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "bdev_nvme_set_hotplug", 00:17:40.188 "params": { 00:17:40.188 "enable": false, 00:17:40.188 "period_us": 100000 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "bdev_malloc_create", 00:17:40.188 "params": { 00:17:40.188 "block_size": 4096, 00:17:40.188 "name": "malloc0", 00:17:40.188 "num_blocks": 8192, 00:17:40.188 "optimal_io_boundary": 0, 00:17:40.188 "physical_block_size": 4096, 00:17:40.188 "uuid": "d54e9b98-8f29-4930-adb1-73cdbc5f2b30" 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "bdev_wait_for_examine" 00:17:40.188 } 00:17:40.188 ] 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "subsystem": "nbd", 00:17:40.188 "config": [] 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "subsystem": "scheduler", 00:17:40.188 "config": [ 00:17:40.188 { 00:17:40.188 "method": "framework_set_scheduler", 00:17:40.188 
"params": { 00:17:40.188 "name": "static" 00:17:40.188 } 00:17:40.188 } 00:17:40.188 ] 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "subsystem": "nvmf", 00:17:40.188 "config": [ 00:17:40.188 { 00:17:40.188 "method": "nvmf_set_config", 00:17:40.188 "params": { 00:17:40.188 "admin_cmd_passthru": { 00:17:40.188 "identify_ctrlr": false 00:17:40.188 }, 00:17:40.188 "discovery_filter": "match_any" 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "nvmf_set_max_subsystems", 00:17:40.188 "params": { 00:17:40.188 "max_subsystems": 1024 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "nvmf_set_crdt", 00:17:40.188 "params": { 00:17:40.188 "crdt1": 0, 00:17:40.188 "crdt2": 0, 00:17:40.188 "crdt3": 0 00:17:40.188 } 00:17:40.188 }, 00:17:40.188 { 00:17:40.188 "method": "nvmf_create_transport", 00:17:40.188 "params": { 00:17:40.188 "abort_timeout_sec": 1, 00:17:40.188 "buf_cache_size": 4294967295, 00:17:40.188 "c2h_success": false, 00:17:40.188 "dif_insert_or_strip": false, 00:17:40.189 "in_capsule_data_size": 4096, 00:17:40.189 "io_unit_size": 131072, 00:17:40.189 "max_aq_depth": 128, 00:17:40.189 "max_io_qpairs_per_ctrlr": 127, 00:17:40.189 "max_io_size": 131072, 00:17:40.189 "max_queue_depth": 128, 00:17:40.189 "num_shared_buffers": 511, 00:17:40.189 "sock_priority": 0, 00:17:40.189 "trtype": "TCP", 00:17:40.189 "zcopy": false 00:17:40.189 } 00:17:40.189 }, 00:17:40.189 { 00:17:40.189 "method": "nvmf_create_subsystem", 00:17:40.189 "params": { 00:17:40.189 "allow_any_host": false, 00:17:40.189 "ana_reporting": false, 00:17:40.189 "max_cntlid": 65519, 00:17:40.189 "max_namespaces": 10, 00:17:40.189 "min_cntlid": 1, 00:17:40.189 "model_number": "SPDK bdev Controller", 00:17:40.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.189 "serial_number": "SPDK00000000000001" 00:17:40.189 } 00:17:40.189 }, 00:17:40.189 { 00:17:40.189 "method": "nvmf_subsystem_add_host", 00:17:40.189 "params": { 00:17:40.189 "host": "nqn.2016-06.io.spdk:host1", 00:17:40.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.189 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:40.189 } 00:17:40.189 }, 00:17:40.189 { 00:17:40.189 "method": "nvmf_subsystem_add_ns", 00:17:40.189 "params": { 00:17:40.189 "namespace": { 00:17:40.189 "bdev_name": "malloc0", 00:17:40.189 "nguid": "D54E9B988F294930ADB173CDBC5F2B30", 00:17:40.189 "nsid": 1, 00:17:40.189 "uuid": "d54e9b98-8f29-4930-adb1-73cdbc5f2b30" 00:17:40.189 }, 00:17:40.189 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:40.189 } 00:17:40.189 }, 00:17:40.189 { 00:17:40.189 "method": "nvmf_subsystem_add_listener", 00:17:40.189 "params": { 00:17:40.189 "listen_address": { 00:17:40.189 "adrfam": "IPv4", 00:17:40.189 "traddr": "10.0.0.2", 00:17:40.189 "trsvcid": "4420", 00:17:40.189 "trtype": "TCP" 00:17:40.189 }, 00:17:40.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.189 "secure_channel": true 00:17:40.189 } 00:17:40.189 } 00:17:40.189 ] 00:17:40.189 } 00:17:40.189 ] 00:17:40.189 }' 00:17:40.189 13:02:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.189 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:40.189 13:02:20 -- nvmf/common.sh@469 -- # nvmfpid=89271 00:17:40.189 13:02:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:40.189 13:02:20 -- nvmf/common.sh@470 -- # waitforlisten 89271 00:17:40.189 13:02:20 -- common/autotest_common.sh@829 -- # '[' -z 89271 ']' 00:17:40.189 13:02:20 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.189 13:02:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.189 13:02:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.189 13:02:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.189 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:40.189 [2024-12-13 13:02:20.824233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:40.189 [2024-12-13 13:02:20.824342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.189 [2024-12-13 13:02:20.949966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.447 [2024-12-13 13:02:21.017498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.448 [2024-12-13 13:02:21.017664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.448 [2024-12-13 13:02:21.017677] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.448 [2024-12-13 13:02:21.017685] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.448 [2024-12-13 13:02:21.017713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.706 [2024-12-13 13:02:21.230088] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.706 [2024-12-13 13:02:21.262049] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.706 [2024-12-13 13:02:21.262292] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.275 13:02:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.275 13:02:21 -- common/autotest_common.sh@862 -- # return 0 00:17:41.275 13:02:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:41.275 13:02:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.275 13:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:41.275 13:02:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.275 13:02:21 -- target/tls.sh@216 -- # bdevperf_pid=89315 00:17:41.275 13:02:21 -- target/tls.sh@217 -- # waitforlisten 89315 /var/tmp/bdevperf.sock 00:17:41.275 13:02:21 -- common/autotest_common.sh@829 -- # '[' -z 89315 ']' 00:17:41.275 13:02:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.275 13:02:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.275 13:02:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
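The target that just came up was configured entirely through the JSON document echoed above and handed to nvmf_tgt on /dev/fd/62, so no RPCs are needed after startup. A minimal sketch of that pattern, keeping only the TLS-relevant methods from the echoed config (paths, NQNs and addresses are the ones from this run; the real config also sets iobuf, sock, accel, bdev and scheduler options):

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c <(echo '{
    "subsystems": [ { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem", "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
          "serial_number": "SPDK00000000000001", "allow_any_host": false } },
      { "method": "nvmf_subsystem_add_host", "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
          "host": "nqn.2016-06.io.spdk:host1",
          "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" } },
      { "method": "nvmf_subsystem_add_listener", "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
          "secure_channel": true,
          "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                              "traddr": "10.0.0.2", "trsvcid": "4420" } } } ] } ] }')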
00:17:41.276 13:02:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.276 13:02:21 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:41.276 13:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:41.276 13:02:21 -- target/tls.sh@213 -- # echo '{ 00:17:41.276 "subsystems": [ 00:17:41.276 { 00:17:41.276 "subsystem": "iobuf", 00:17:41.276 "config": [ 00:17:41.276 { 00:17:41.276 "method": "iobuf_set_options", 00:17:41.276 "params": { 00:17:41.276 "large_bufsize": 135168, 00:17:41.276 "large_pool_count": 1024, 00:17:41.276 "small_bufsize": 8192, 00:17:41.276 "small_pool_count": 8192 00:17:41.276 } 00:17:41.276 } 00:17:41.276 ] 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "subsystem": "sock", 00:17:41.276 "config": [ 00:17:41.276 { 00:17:41.276 "method": "sock_impl_set_options", 00:17:41.276 "params": { 00:17:41.276 "enable_ktls": false, 00:17:41.276 "enable_placement_id": 0, 00:17:41.276 "enable_quickack": false, 00:17:41.276 "enable_recv_pipe": true, 00:17:41.276 "enable_zerocopy_send_client": false, 00:17:41.276 "enable_zerocopy_send_server": true, 00:17:41.276 "impl_name": "posix", 00:17:41.276 "recv_buf_size": 2097152, 00:17:41.276 "send_buf_size": 2097152, 00:17:41.276 "tls_version": 0, 00:17:41.276 "zerocopy_threshold": 0 00:17:41.276 } 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "method": "sock_impl_set_options", 00:17:41.276 "params": { 00:17:41.276 "enable_ktls": false, 00:17:41.276 "enable_placement_id": 0, 00:17:41.276 "enable_quickack": false, 00:17:41.276 "enable_recv_pipe": true, 00:17:41.276 "enable_zerocopy_send_client": false, 00:17:41.276 "enable_zerocopy_send_server": true, 00:17:41.276 "impl_name": "ssl", 00:17:41.276 "recv_buf_size": 4096, 00:17:41.276 "send_buf_size": 4096, 00:17:41.276 "tls_version": 0, 00:17:41.276 "zerocopy_threshold": 0 00:17:41.276 } 00:17:41.276 } 00:17:41.276 ] 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "subsystem": "vmd", 00:17:41.276 "config": [] 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "subsystem": "accel", 00:17:41.276 "config": [ 00:17:41.276 { 00:17:41.276 "method": "accel_set_options", 00:17:41.276 "params": { 00:17:41.276 "buf_count": 2048, 00:17:41.276 "large_cache_size": 16, 00:17:41.276 "sequence_count": 2048, 00:17:41.276 "small_cache_size": 128, 00:17:41.276 "task_count": 2048 00:17:41.276 } 00:17:41.276 } 00:17:41.276 ] 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "subsystem": "bdev", 00:17:41.276 "config": [ 00:17:41.276 { 00:17:41.276 "method": "bdev_set_options", 00:17:41.276 "params": { 00:17:41.276 "bdev_auto_examine": true, 00:17:41.276 "bdev_io_cache_size": 256, 00:17:41.276 "bdev_io_pool_size": 65535, 00:17:41.276 "iobuf_large_cache_size": 16, 00:17:41.276 "iobuf_small_cache_size": 128 00:17:41.276 } 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "method": "bdev_raid_set_options", 00:17:41.276 "params": { 00:17:41.276 "process_window_size_kb": 1024 00:17:41.276 } 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "method": "bdev_iscsi_set_options", 00:17:41.276 "params": { 00:17:41.276 "timeout_sec": 30 00:17:41.276 } 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "method": "bdev_nvme_set_options", 00:17:41.276 "params": { 00:17:41.276 "action_on_timeout": "none", 00:17:41.276 "allow_accel_sequence": false, 00:17:41.276 "arbitration_burst": 0, 00:17:41.276 "bdev_retry_count": 3, 00:17:41.276 "ctrlr_loss_timeout_sec": 0, 00:17:41.276 "delay_cmd_submit": true, 00:17:41.276 "fast_io_fail_timeout_sec": 0, 
00:17:41.276 "generate_uuids": false, 00:17:41.276 "high_priority_weight": 0, 00:17:41.276 "io_path_stat": false, 00:17:41.276 "io_queue_requests": 512, 00:17:41.276 "keep_alive_timeout_ms": 10000, 00:17:41.276 "low_priority_weight": 0, 00:17:41.276 "medium_priority_weight": 0, 00:17:41.276 "nvme_adminq_poll_period_us": 10000, 00:17:41.276 "nvme_ioq_poll_period_us": 0, 00:17:41.276 "reconnect_delay_sec": 0, 00:17:41.276 "timeout_admin_us": 0, 00:17:41.276 "timeout_us": 0, 00:17:41.276 "transport_ack_timeout": 0, 00:17:41.276 "transport_retry_count": 4, 00:17:41.276 "transport_tos": 0 00:17:41.276 } 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "method": "bdev_nvme_attach_controller", 00:17:41.276 "params": { 00:17:41.276 "adrfam": "IPv4", 00:17:41.276 "ctrlr_loss_timeout_sec": 0, 00:17:41.276 "ddgst": false, 00:17:41.276 "fast_io_fail_timeout_sec": 0, 00:17:41.276 "hdgst": false, 00:17:41.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.276 "name": "TLSTEST", 00:17:41.276 "prchk_guard": false, 00:17:41.276 "prchk_reftag": false, 00:17:41.276 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:41.276 "reconnect_delay_sec": 0, 00:17:41.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.276 "traddr": "10.0.0.2", 00:17:41.276 "trsvcid": "4420", 00:17:41.276 "trtype": "TCP" 00:17:41.276 } 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "method": "bdev_nvme_set_hotplug", 00:17:41.276 "params": { 00:17:41.276 "enable": false, 00:17:41.276 "period_us": 100000 00:17:41.276 } 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "method": "bdev_wait_for_examine" 00:17:41.276 } 00:17:41.276 ] 00:17:41.276 }, 00:17:41.276 { 00:17:41.276 "subsystem": "nbd", 00:17:41.276 "config": [] 00:17:41.276 } 00:17:41.276 ] 00:17:41.276 }' 00:17:41.276 [2024-12-13 13:02:21.904156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:41.276 [2024-12-13 13:02:21.904267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89315 ] 00:17:41.276 [2024-12-13 13:02:22.045539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.535 [2024-12-13 13:02:22.119083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.535 [2024-12-13 13:02:22.274356] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.471 13:02:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.471 13:02:22 -- common/autotest_common.sh@862 -- # return 0 00:17:42.471 13:02:22 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:42.471 Running I/O for 10 seconds... 
00:17:52.449 00:17:52.449 Latency(us) 00:17:52.449 [2024-12-13T13:02:33.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.449 [2024-12-13T13:02:33.225Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:52.449 Verification LBA range: start 0x0 length 0x2000 00:17:52.449 TLSTESTn1 : 10.02 6129.83 23.94 0.00 0.00 20847.66 4230.05 18707.55 00:17:52.449 [2024-12-13T13:02:33.225Z] =================================================================================================================== 00:17:52.449 [2024-12-13T13:02:33.225Z] Total : 6129.83 23.94 0.00 0.00 20847.66 4230.05 18707.55 00:17:52.449 0 00:17:52.449 13:02:33 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.449 13:02:33 -- target/tls.sh@223 -- # killprocess 89315 00:17:52.449 13:02:33 -- common/autotest_common.sh@936 -- # '[' -z 89315 ']' 00:17:52.449 13:02:33 -- common/autotest_common.sh@940 -- # kill -0 89315 00:17:52.449 13:02:33 -- common/autotest_common.sh@941 -- # uname 00:17:52.449 13:02:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:52.449 13:02:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89315 00:17:52.449 13:02:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:52.449 13:02:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:52.449 killing process with pid 89315 00:17:52.449 13:02:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89315' 00:17:52.449 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.449 00:17:52.449 Latency(us) 00:17:52.449 [2024-12-13T13:02:33.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.449 [2024-12-13T13:02:33.225Z] =================================================================================================================== 00:17:52.449 [2024-12-13T13:02:33.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:52.449 13:02:33 -- common/autotest_common.sh@955 -- # kill 89315 00:17:52.449 13:02:33 -- common/autotest_common.sh@960 -- # wait 89315 00:17:52.708 13:02:33 -- target/tls.sh@224 -- # killprocess 89271 00:17:52.708 13:02:33 -- common/autotest_common.sh@936 -- # '[' -z 89271 ']' 00:17:52.708 13:02:33 -- common/autotest_common.sh@940 -- # kill -0 89271 00:17:52.708 13:02:33 -- common/autotest_common.sh@941 -- # uname 00:17:52.708 13:02:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:52.708 13:02:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89271 00:17:52.708 killing process with pid 89271 00:17:52.708 13:02:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:52.708 13:02:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:52.708 13:02:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89271' 00:17:52.708 13:02:33 -- common/autotest_common.sh@955 -- # kill 89271 00:17:52.708 13:02:33 -- common/autotest_common.sh@960 -- # wait 89271 00:17:52.968 13:02:33 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:52.968 13:02:33 -- target/tls.sh@227 -- # cleanup 00:17:52.968 13:02:33 -- target/tls.sh@15 -- # process_shm --id 0 00:17:52.968 13:02:33 -- common/autotest_common.sh@806 -- # type=--id 00:17:52.968 13:02:33 -- common/autotest_common.sh@807 -- # id=0 00:17:52.968 13:02:33 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:52.968 13:02:33 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:17:52.968 13:02:33 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:52.968 13:02:33 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:52.968 13:02:33 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:52.968 13:02:33 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:52.968 nvmf_trace.0 00:17:52.968 13:02:33 -- common/autotest_common.sh@821 -- # return 0 00:17:52.968 13:02:33 -- target/tls.sh@16 -- # killprocess 89315 00:17:52.968 13:02:33 -- common/autotest_common.sh@936 -- # '[' -z 89315 ']' 00:17:52.968 Process with pid 89315 is not found 00:17:52.968 13:02:33 -- common/autotest_common.sh@940 -- # kill -0 89315 00:17:52.968 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89315) - No such process 00:17:52.968 13:02:33 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89315 is not found' 00:17:52.968 13:02:33 -- target/tls.sh@17 -- # nvmftestfini 00:17:52.968 13:02:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:52.968 13:02:33 -- nvmf/common.sh@116 -- # sync 00:17:52.968 13:02:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:52.968 13:02:33 -- nvmf/common.sh@119 -- # set +e 00:17:52.968 13:02:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:52.968 13:02:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:52.968 rmmod nvme_tcp 00:17:52.968 rmmod nvme_fabrics 00:17:52.968 rmmod nvme_keyring 00:17:52.968 13:02:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:52.968 13:02:33 -- nvmf/common.sh@123 -- # set -e 00:17:52.968 13:02:33 -- nvmf/common.sh@124 -- # return 0 00:17:52.968 13:02:33 -- nvmf/common.sh@477 -- # '[' -n 89271 ']' 00:17:52.968 13:02:33 -- nvmf/common.sh@478 -- # killprocess 89271 00:17:52.968 13:02:33 -- common/autotest_common.sh@936 -- # '[' -z 89271 ']' 00:17:52.968 13:02:33 -- common/autotest_common.sh@940 -- # kill -0 89271 00:17:52.968 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89271) - No such process 00:17:52.968 Process with pid 89271 is not found 00:17:52.968 13:02:33 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89271 is not found' 00:17:52.968 13:02:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:52.968 13:02:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:52.968 13:02:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:52.968 13:02:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.968 13:02:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:52.968 13:02:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.968 13:02:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.968 13:02:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.968 13:02:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:52.968 13:02:33 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:52.968 ************************************ 00:17:52.968 END TEST nvmf_tls 00:17:52.968 ************************************ 00:17:52.968 00:17:52.968 real 1m10.094s 00:17:52.968 user 1m47.967s 00:17:52.968 sys 0m24.721s 00:17:52.968 13:02:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.968 13:02:33 -- common/autotest_common.sh@10 -- # 
set +x 00:17:53.227 13:02:33 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:53.227 13:02:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:53.227 13:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.227 13:02:33 -- common/autotest_common.sh@10 -- # set +x 00:17:53.227 ************************************ 00:17:53.227 START TEST nvmf_fips 00:17:53.227 ************************************ 00:17:53.227 13:02:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:53.227 * Looking for test storage... 00:17:53.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:53.227 13:02:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:53.227 13:02:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:53.227 13:02:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:53.227 13:02:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:53.228 13:02:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:53.228 13:02:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.228 13:02:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.228 13:02:33 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.228 13:02:33 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.228 13:02:33 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.228 13:02:33 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.228 13:02:33 -- scripts/common.sh@337 -- # local 'op=<' 00:17:53.228 13:02:33 -- scripts/common.sh@339 -- # ver1_l=2 00:17:53.228 13:02:33 -- scripts/common.sh@340 -- # ver2_l=1 00:17:53.228 13:02:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:53.228 13:02:33 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.228 13:02:33 -- scripts/common.sh@344 -- # : 1 00:17:53.228 13:02:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.228 13:02:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.228 13:02:33 -- scripts/common.sh@364 -- # decimal 1 00:17:53.228 13:02:33 -- scripts/common.sh@352 -- # local d=1 00:17:53.228 13:02:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.228 13:02:33 -- scripts/common.sh@354 -- # echo 1 00:17:53.228 13:02:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:53.228 13:02:33 -- scripts/common.sh@365 -- # decimal 2 00:17:53.228 13:02:33 -- scripts/common.sh@352 -- # local d=2 00:17:53.228 13:02:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.228 13:02:33 -- scripts/common.sh@354 -- # echo 2 00:17:53.228 13:02:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:53.228 13:02:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.228 13:02:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.228 13:02:33 -- scripts/common.sh@367 -- # return 0 00:17:53.228 13:02:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.228 13:02:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.228 --rc genhtml_branch_coverage=1 00:17:53.228 --rc genhtml_function_coverage=1 00:17:53.228 --rc genhtml_legend=1 00:17:53.228 --rc geninfo_all_blocks=1 00:17:53.228 --rc geninfo_unexecuted_blocks=1 00:17:53.228 00:17:53.228 ' 00:17:53.228 13:02:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.228 --rc genhtml_branch_coverage=1 00:17:53.228 --rc genhtml_function_coverage=1 00:17:53.228 --rc genhtml_legend=1 00:17:53.228 --rc geninfo_all_blocks=1 00:17:53.228 --rc geninfo_unexecuted_blocks=1 00:17:53.228 00:17:53.228 ' 00:17:53.228 13:02:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.228 --rc genhtml_branch_coverage=1 00:17:53.228 --rc genhtml_function_coverage=1 00:17:53.228 --rc genhtml_legend=1 00:17:53.228 --rc geninfo_all_blocks=1 00:17:53.228 --rc geninfo_unexecuted_blocks=1 00:17:53.228 00:17:53.228 ' 00:17:53.228 13:02:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.228 --rc genhtml_branch_coverage=1 00:17:53.228 --rc genhtml_function_coverage=1 00:17:53.228 --rc genhtml_legend=1 00:17:53.228 --rc geninfo_all_blocks=1 00:17:53.228 --rc geninfo_unexecuted_blocks=1 00:17:53.228 00:17:53.228 ' 00:17:53.228 13:02:33 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:53.228 13:02:33 -- nvmf/common.sh@7 -- # uname -s 00:17:53.228 13:02:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.228 13:02:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.228 13:02:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.228 13:02:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.228 13:02:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.228 13:02:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.228 13:02:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.228 13:02:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.228 13:02:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.228 13:02:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.228 13:02:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:17:53.228 
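The NQN generated just above becomes the host identity every connect in this suite uses; in effect common.sh does the equivalent of the following (the parameter expansion shown is an illustrative assumption, the script's exact extraction may differ, and the uuid is freshly generated on every run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<fresh uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare uuid portion, later passed as --hostid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")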
13:02:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:17:53.228 13:02:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.228 13:02:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.228 13:02:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:53.228 13:02:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:53.228 13:02:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.228 13:02:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.228 13:02:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.228 13:02:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.228 13:02:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.228 13:02:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.228 13:02:33 -- paths/export.sh@5 -- # export PATH 00:17:53.228 13:02:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.228 13:02:33 -- nvmf/common.sh@46 -- # : 0 00:17:53.228 13:02:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.228 13:02:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.228 13:02:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.228 13:02:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.228 13:02:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.228 13:02:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:53.228 13:02:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.228 13:02:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.228 13:02:33 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:53.228 13:02:33 -- fips/fips.sh@89 -- # check_openssl_version 00:17:53.228 13:02:33 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:53.228 13:02:33 -- fips/fips.sh@85 -- # openssl version 00:17:53.228 13:02:33 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:53.487 13:02:34 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:53.487 13:02:34 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:53.487 13:02:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.487 13:02:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.487 13:02:34 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.487 13:02:34 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.487 13:02:34 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.487 13:02:34 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.487 13:02:34 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:53.487 13:02:34 -- scripts/common.sh@339 -- # ver1_l=3 00:17:53.487 13:02:34 -- scripts/common.sh@340 -- # ver2_l=3 00:17:53.487 13:02:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:53.487 13:02:34 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.487 13:02:34 -- scripts/common.sh@347 -- # : 1 00:17:53.487 13:02:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.487 13:02:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.487 13:02:34 -- scripts/common.sh@364 -- # decimal 3 00:17:53.487 13:02:34 -- scripts/common.sh@352 -- # local d=3 00:17:53.487 13:02:34 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:53.487 13:02:34 -- scripts/common.sh@354 -- # echo 3 00:17:53.487 13:02:34 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:53.487 13:02:34 -- scripts/common.sh@365 -- # decimal 3 00:17:53.487 13:02:34 -- scripts/common.sh@352 -- # local d=3 00:17:53.487 13:02:34 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:53.487 13:02:34 -- scripts/common.sh@354 -- # echo 3 00:17:53.487 13:02:34 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:53.487 13:02:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.488 13:02:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.488 13:02:34 -- scripts/common.sh@363 -- # (( v++ )) 00:17:53.488 13:02:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.488 13:02:34 -- scripts/common.sh@364 -- # decimal 1 00:17:53.488 13:02:34 -- scripts/common.sh@352 -- # local d=1 00:17:53.488 13:02:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.488 13:02:34 -- scripts/common.sh@354 -- # echo 1 00:17:53.488 13:02:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:53.488 13:02:34 -- scripts/common.sh@365 -- # decimal 0 00:17:53.488 13:02:34 -- scripts/common.sh@352 -- # local d=0 00:17:53.488 13:02:34 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:53.488 13:02:34 -- scripts/common.sh@354 -- # echo 0 00:17:53.488 13:02:34 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:53.488 13:02:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.488 13:02:34 -- scripts/common.sh@366 -- # return 0 00:17:53.488 13:02:34 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:53.488 13:02:34 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:53.488 13:02:34 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:53.488 13:02:34 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:53.488 13:02:34 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:53.488 13:02:34 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:53.488 13:02:34 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:53.488 13:02:34 -- fips/fips.sh@113 -- # build_openssl_config 00:17:53.488 13:02:34 -- fips/fips.sh@37 -- # cat 00:17:53.488 13:02:34 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:53.488 13:02:34 -- fips/fips.sh@58 -- # cat - 00:17:53.488 13:02:34 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:53.488 13:02:34 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:53.488 13:02:34 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:53.488 13:02:34 -- fips/fips.sh@116 -- # openssl list -providers 00:17:53.488 13:02:34 -- fips/fips.sh@116 -- # grep name 00:17:53.488 13:02:34 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:53.488 13:02:34 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:53.488 13:02:34 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:53.488 13:02:34 -- fips/fips.sh@127 -- # : 00:17:53.488 13:02:34 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:53.488 13:02:34 -- common/autotest_common.sh@650 -- # local es=0 00:17:53.488 13:02:34 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:53.488 13:02:34 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:53.488 13:02:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.488 13:02:34 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:53.488 13:02:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.488 13:02:34 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:53.488 13:02:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.488 13:02:34 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:53.488 13:02:34 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:53.488 13:02:34 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:53.488 Error setting digest 00:17:53.488 40A29DF3087F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:53.488 40A29DF3087F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:53.488 13:02:34 -- common/autotest_common.sh@653 -- # es=1 00:17:53.488 13:02:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.488 13:02:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.488 13:02:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.488 13:02:34 -- fips/fips.sh@130 -- # nvmftestinit 00:17:53.488 13:02:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:53.488 13:02:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.488 13:02:34 -- nvmf/common.sh@436 -- # prepare_net_devs 
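The FIPS gate the script just passed amounts to a handful of checks on the system OpenSSL before any TLS traffic is generated; a sketch assembled from the commands in the trace (on this Red Hat Enterprise Linux 9 build the fipsinstall helper is disabled, and the MD5 digest is expected to fail exactly as shown above, which is what proves FIPS mode is active):

  openssl version | awk '{print $2}'                       # must be >= 3.0.0
  test -f "$(openssl info -modulesdir)/fips.so"             # FIPS provider module is installed
  openssl list -providers | grep name                       # expects both "base" and "fips" providers
  openssl md5 /dev/null \
      && echo 'MD5 allowed - OpenSSL is NOT in FIPS mode' \
      || echo 'MD5 rejected - FIPS mode active'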
00:17:53.488 13:02:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.488 13:02:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.488 13:02:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.488 13:02:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.488 13:02:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.488 13:02:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:53.488 13:02:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:53.488 13:02:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:53.488 13:02:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:53.488 13:02:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:53.488 13:02:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:53.488 13:02:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.488 13:02:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.488 13:02:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:53.488 13:02:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:53.488 13:02:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:53.488 13:02:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:53.488 13:02:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:53.488 13:02:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.488 13:02:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:53.488 13:02:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:53.488 13:02:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:53.488 13:02:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:53.488 13:02:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:53.488 13:02:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:53.488 Cannot find device "nvmf_tgt_br" 00:17:53.488 13:02:34 -- nvmf/common.sh@154 -- # true 00:17:53.488 13:02:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.488 Cannot find device "nvmf_tgt_br2" 00:17:53.488 13:02:34 -- nvmf/common.sh@155 -- # true 00:17:53.488 13:02:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:53.488 13:02:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:53.488 Cannot find device "nvmf_tgt_br" 00:17:53.488 13:02:34 -- nvmf/common.sh@157 -- # true 00:17:53.488 13:02:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:53.488 Cannot find device "nvmf_tgt_br2" 00:17:53.488 13:02:34 -- nvmf/common.sh@158 -- # true 00:17:53.488 13:02:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:53.747 13:02:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:53.747 13:02:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.747 13:02:34 -- nvmf/common.sh@161 -- # true 00:17:53.747 13:02:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.747 13:02:34 -- nvmf/common.sh@162 -- # true 00:17:53.747 13:02:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.747 13:02:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.747 13:02:34 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.747 13:02:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.747 13:02:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.747 13:02:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.747 13:02:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.747 13:02:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:53.747 13:02:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:53.747 13:02:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:53.747 13:02:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:53.747 13:02:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:53.747 13:02:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:53.747 13:02:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.747 13:02:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.747 13:02:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.747 13:02:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:53.747 13:02:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:53.747 13:02:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.747 13:02:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.747 13:02:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.747 13:02:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.747 13:02:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.747 13:02:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:53.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:53.747 00:17:53.747 --- 10.0.0.2 ping statistics --- 00:17:53.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.747 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:53.747 13:02:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:53.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:53.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:53.747 00:17:53.747 --- 10.0.0.3 ping statistics --- 00:17:53.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.747 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:53.747 13:02:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:53.747 00:17:53.747 --- 10.0.0.1 ping statistics --- 00:17:53.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.747 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:53.747 13:02:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.747 13:02:34 -- nvmf/common.sh@421 -- # return 0 00:17:53.747 13:02:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:53.747 13:02:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.747 13:02:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:53.747 13:02:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:53.747 13:02:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.747 13:02:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:53.747 13:02:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:53.747 13:02:34 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:53.747 13:02:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:53.747 13:02:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.747 13:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:53.747 13:02:34 -- nvmf/common.sh@469 -- # nvmfpid=89684 00:17:53.747 13:02:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:53.747 13:02:34 -- nvmf/common.sh@470 -- # waitforlisten 89684 00:17:53.747 13:02:34 -- common/autotest_common.sh@829 -- # '[' -z 89684 ']' 00:17:53.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.747 13:02:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.747 13:02:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.747 13:02:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.747 13:02:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.747 13:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:54.006 [2024-12-13 13:02:34.581632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:54.006 [2024-12-13 13:02:34.581930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.007 [2024-12-13 13:02:34.720051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.265 [2024-12-13 13:02:34.789606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:54.265 [2024-12-13 13:02:34.790047] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.265 [2024-12-13 13:02:34.790071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.265 [2024-12-13 13:02:34.790081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
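Because NET_TYPE=virt, the fabric here is a purely local veth-and-bridge topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (with 10.0.0.3 as a second address) while the initiator stays in the root namespace on 10.0.0.1, and the pings above confirm reachability in both directions before the target comes up. A pared-down sketch of what nvmf_veth_init sets up, using the interface names and addresses from the trace (the second target interface and some link-up steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT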
00:17:54.265 [2024-12-13 13:02:34.790116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.832 13:02:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.832 13:02:35 -- common/autotest_common.sh@862 -- # return 0 00:17:54.832 13:02:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:54.832 13:02:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:54.832 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:17:55.091 13:02:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.091 13:02:35 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:55.091 13:02:35 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:55.091 13:02:35 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.091 13:02:35 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:55.091 13:02:35 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.091 13:02:35 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.091 13:02:35 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.091 13:02:35 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.350 [2024-12-13 13:02:35.886445] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.350 [2024-12-13 13:02:35.902427] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:55.350 [2024-12-13 13:02:35.902616] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.350 malloc0 00:17:55.350 13:02:35 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.350 13:02:35 -- fips/fips.sh@147 -- # bdevperf_pid=89742 00:17:55.350 13:02:35 -- fips/fips.sh@148 -- # waitforlisten 89742 /var/tmp/bdevperf.sock 00:17:55.350 13:02:35 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.350 13:02:35 -- common/autotest_common.sh@829 -- # '[' -z 89742 ']' 00:17:55.350 13:02:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.350 13:02:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.350 13:02:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.350 13:02:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.350 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:17:55.350 [2024-12-13 13:02:36.037623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:55.350 [2024-12-13 13:02:36.037931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89742 ] 00:17:55.608 [2024-12-13 13:02:36.177299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.608 [2024-12-13 13:02:36.241131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.175 13:02:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.175 13:02:36 -- common/autotest_common.sh@862 -- # return 0 00:17:56.175 13:02:36 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:56.433 [2024-12-13 13:02:37.157080] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.692 TLSTESTn1 00:17:56.692 13:02:37 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.692 Running I/O for 10 seconds... 00:18:06.665 00:18:06.665 Latency(us) 00:18:06.665 [2024-12-13T13:02:47.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.665 [2024-12-13T13:02:47.441Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:06.665 Verification LBA range: start 0x0 length 0x2000 00:18:06.665 TLSTESTn1 : 10.02 6062.94 23.68 0.00 0.00 21076.71 4349.21 18826.71 00:18:06.665 [2024-12-13T13:02:47.441Z] =================================================================================================================== 00:18:06.665 [2024-12-13T13:02:47.441Z] Total : 6062.94 23.68 0.00 0.00 21076.71 4349.21 18826.71 00:18:06.665 0 00:18:06.665 13:02:47 -- fips/fips.sh@1 -- # cleanup 00:18:06.665 13:02:47 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:06.665 13:02:47 -- common/autotest_common.sh@806 -- # type=--id 00:18:06.665 13:02:47 -- common/autotest_common.sh@807 -- # id=0 00:18:06.665 13:02:47 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:06.665 13:02:47 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:06.665 13:02:47 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:06.665 13:02:47 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:06.665 13:02:47 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:06.665 13:02:47 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:06.665 nvmf_trace.0 00:18:06.924 13:02:47 -- common/autotest_common.sh@821 -- # return 0 00:18:06.924 13:02:47 -- fips/fips.sh@16 -- # killprocess 89742 00:18:06.924 13:02:47 -- common/autotest_common.sh@936 -- # '[' -z 89742 ']' 00:18:06.924 13:02:47 -- common/autotest_common.sh@940 -- # kill -0 89742 00:18:06.924 13:02:47 -- common/autotest_common.sh@941 -- # uname 00:18:06.924 13:02:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:06.924 13:02:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89742 00:18:06.924 13:02:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:06.924 13:02:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:06.924 
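For reference, the TLS-specific pieces of the run whose numbers appear above reduce to writing the interchange-format PSK to a mode-0600 file and handing that file to the initiator's attach call; a condensed sketch with the exact values from this trace (the target-side subsystem plumbing performed by setup_nvmf_tgt_conf through rpc.py is omitted here):

  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"
  # initiator side: build a TLS-protected NVMe/TCP bdev against the listener on 10.0.0.2:4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"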
killing process with pid 89742 00:18:06.924 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.924 00:18:06.924 Latency(us) 00:18:06.924 [2024-12-13T13:02:47.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.924 [2024-12-13T13:02:47.700Z] =================================================================================================================== 00:18:06.924 [2024-12-13T13:02:47.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.924 13:02:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89742' 00:18:06.924 13:02:47 -- common/autotest_common.sh@955 -- # kill 89742 00:18:06.924 13:02:47 -- common/autotest_common.sh@960 -- # wait 89742 00:18:06.924 13:02:47 -- fips/fips.sh@17 -- # nvmftestfini 00:18:06.924 13:02:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:06.924 13:02:47 -- nvmf/common.sh@116 -- # sync 00:18:07.183 13:02:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:07.183 13:02:47 -- nvmf/common.sh@119 -- # set +e 00:18:07.183 13:02:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:07.183 13:02:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:07.183 rmmod nvme_tcp 00:18:07.183 rmmod nvme_fabrics 00:18:07.183 rmmod nvme_keyring 00:18:07.183 13:02:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:07.183 13:02:47 -- nvmf/common.sh@123 -- # set -e 00:18:07.183 13:02:47 -- nvmf/common.sh@124 -- # return 0 00:18:07.183 13:02:47 -- nvmf/common.sh@477 -- # '[' -n 89684 ']' 00:18:07.183 13:02:47 -- nvmf/common.sh@478 -- # killprocess 89684 00:18:07.183 13:02:47 -- common/autotest_common.sh@936 -- # '[' -z 89684 ']' 00:18:07.183 13:02:47 -- common/autotest_common.sh@940 -- # kill -0 89684 00:18:07.183 13:02:47 -- common/autotest_common.sh@941 -- # uname 00:18:07.183 13:02:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.183 13:02:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89684 00:18:07.183 13:02:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:07.183 killing process with pid 89684 00:18:07.183 13:02:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:07.183 13:02:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89684' 00:18:07.183 13:02:47 -- common/autotest_common.sh@955 -- # kill 89684 00:18:07.183 13:02:47 -- common/autotest_common.sh@960 -- # wait 89684 00:18:07.442 13:02:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:07.442 13:02:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.442 13:02:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.442 13:02:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.442 13:02:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.442 13:02:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.442 13:02:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.442 13:02:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.442 13:02:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:07.442 13:02:48 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:07.442 00:18:07.442 real 0m14.254s 00:18:07.442 user 0m18.977s 00:18:07.442 sys 0m5.918s 00:18:07.442 13:02:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:07.442 13:02:48 -- common/autotest_common.sh@10 -- # set +x 00:18:07.442 ************************************ 00:18:07.442 END TEST nvmf_fips 
00:18:07.442 ************************************ 00:18:07.442 13:02:48 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:07.442 13:02:48 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:07.442 13:02:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:07.442 13:02:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.442 13:02:48 -- common/autotest_common.sh@10 -- # set +x 00:18:07.442 ************************************ 00:18:07.442 START TEST nvmf_fuzz 00:18:07.442 ************************************ 00:18:07.442 13:02:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:07.442 * Looking for test storage... 00:18:07.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.442 13:02:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:07.442 13:02:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:07.442 13:02:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:07.702 13:02:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:07.702 13:02:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:07.702 13:02:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:07.702 13:02:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:07.702 13:02:48 -- scripts/common.sh@335 -- # IFS=.-: 00:18:07.702 13:02:48 -- scripts/common.sh@335 -- # read -ra ver1 00:18:07.702 13:02:48 -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.702 13:02:48 -- scripts/common.sh@336 -- # read -ra ver2 00:18:07.702 13:02:48 -- scripts/common.sh@337 -- # local 'op=<' 00:18:07.702 13:02:48 -- scripts/common.sh@339 -- # ver1_l=2 00:18:07.702 13:02:48 -- scripts/common.sh@340 -- # ver2_l=1 00:18:07.702 13:02:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:07.702 13:02:48 -- scripts/common.sh@343 -- # case "$op" in 00:18:07.702 13:02:48 -- scripts/common.sh@344 -- # : 1 00:18:07.702 13:02:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:07.702 13:02:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.702 13:02:48 -- scripts/common.sh@364 -- # decimal 1 00:18:07.702 13:02:48 -- scripts/common.sh@352 -- # local d=1 00:18:07.702 13:02:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.702 13:02:48 -- scripts/common.sh@354 -- # echo 1 00:18:07.702 13:02:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:07.702 13:02:48 -- scripts/common.sh@365 -- # decimal 2 00:18:07.702 13:02:48 -- scripts/common.sh@352 -- # local d=2 00:18:07.702 13:02:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.702 13:02:48 -- scripts/common.sh@354 -- # echo 2 00:18:07.702 13:02:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:07.702 13:02:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:07.702 13:02:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:07.702 13:02:48 -- scripts/common.sh@367 -- # return 0 00:18:07.702 13:02:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.702 13:02:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.702 --rc genhtml_branch_coverage=1 00:18:07.702 --rc genhtml_function_coverage=1 00:18:07.702 --rc genhtml_legend=1 00:18:07.702 --rc geninfo_all_blocks=1 00:18:07.702 --rc geninfo_unexecuted_blocks=1 00:18:07.702 00:18:07.702 ' 00:18:07.702 13:02:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.702 --rc genhtml_branch_coverage=1 00:18:07.702 --rc genhtml_function_coverage=1 00:18:07.702 --rc genhtml_legend=1 00:18:07.702 --rc geninfo_all_blocks=1 00:18:07.702 --rc geninfo_unexecuted_blocks=1 00:18:07.702 00:18:07.702 ' 00:18:07.702 13:02:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.702 --rc genhtml_branch_coverage=1 00:18:07.702 --rc genhtml_function_coverage=1 00:18:07.702 --rc genhtml_legend=1 00:18:07.702 --rc geninfo_all_blocks=1 00:18:07.702 --rc geninfo_unexecuted_blocks=1 00:18:07.702 00:18:07.702 ' 00:18:07.702 13:02:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.702 --rc genhtml_branch_coverage=1 00:18:07.702 --rc genhtml_function_coverage=1 00:18:07.702 --rc genhtml_legend=1 00:18:07.702 --rc geninfo_all_blocks=1 00:18:07.702 --rc geninfo_unexecuted_blocks=1 00:18:07.702 00:18:07.702 ' 00:18:07.702 13:02:48 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.702 13:02:48 -- nvmf/common.sh@7 -- # uname -s 00:18:07.702 13:02:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.702 13:02:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.702 13:02:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.702 13:02:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.702 13:02:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.702 13:02:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.702 13:02:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.702 13:02:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.702 13:02:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.702 13:02:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.702 13:02:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
00:18:07.702 13:02:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:18:07.702 13:02:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.702 13:02:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.702 13:02:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.702 13:02:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.702 13:02:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.702 13:02:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.702 13:02:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.702 13:02:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.702 13:02:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.702 13:02:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.702 13:02:48 -- paths/export.sh@5 -- # export PATH 00:18:07.702 13:02:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.702 13:02:48 -- nvmf/common.sh@46 -- # : 0 00:18:07.702 13:02:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:07.702 13:02:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:07.702 13:02:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:07.702 13:02:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.702 13:02:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.702 13:02:48 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:07.702 13:02:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:07.702 13:02:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:07.702 13:02:48 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:07.702 13:02:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:07.702 13:02:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.702 13:02:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:07.702 13:02:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:07.702 13:02:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:07.702 13:02:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.702 13:02:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.702 13:02:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.702 13:02:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:07.703 13:02:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:07.703 13:02:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:07.703 13:02:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:07.703 13:02:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:07.703 13:02:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:07.703 13:02:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.703 13:02:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.703 13:02:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.703 13:02:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:07.703 13:02:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.703 13:02:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.703 13:02:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.703 13:02:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.703 13:02:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.703 13:02:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.703 13:02:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.703 13:02:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.703 13:02:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:07.703 13:02:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:07.703 Cannot find device "nvmf_tgt_br" 00:18:07.703 13:02:48 -- nvmf/common.sh@154 -- # true 00:18:07.703 13:02:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.703 Cannot find device "nvmf_tgt_br2" 00:18:07.703 13:02:48 -- nvmf/common.sh@155 -- # true 00:18:07.703 13:02:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:07.703 13:02:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:07.703 Cannot find device "nvmf_tgt_br" 00:18:07.703 13:02:48 -- nvmf/common.sh@157 -- # true 00:18:07.703 13:02:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:07.703 Cannot find device "nvmf_tgt_br2" 00:18:07.703 13:02:48 -- nvmf/common.sh@158 -- # true 00:18:07.703 13:02:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:07.703 13:02:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:07.703 13:02:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.703 13:02:48 -- nvmf/common.sh@161 -- # true 00:18:07.703 13:02:48 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.703 13:02:48 -- nvmf/common.sh@162 -- # true 00:18:07.703 13:02:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.703 13:02:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.703 13:02:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.703 13:02:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.703 13:02:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.703 13:02:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.703 13:02:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.703 13:02:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:07.703 13:02:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:07.961 13:02:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:07.961 13:02:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:07.961 13:02:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:07.961 13:02:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:07.961 13:02:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.961 13:02:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.961 13:02:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.961 13:02:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:07.961 13:02:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:07.961 13:02:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.961 13:02:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.961 13:02:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.961 13:02:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.962 13:02:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.962 13:02:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:07.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:18:07.962 00:18:07.962 --- 10.0.0.2 ping statistics --- 00:18:07.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.962 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:07.962 13:02:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:07.962 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.962 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:18:07.962 00:18:07.962 --- 10.0.0.3 ping statistics --- 00:18:07.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.962 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:07.962 13:02:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:18:07.962 00:18:07.962 --- 10.0.0.1 ping statistics --- 00:18:07.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.962 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:07.962 13:02:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.962 13:02:48 -- nvmf/common.sh@421 -- # return 0 00:18:07.962 13:02:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:07.962 13:02:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.962 13:02:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:07.962 13:02:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:07.962 13:02:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.962 13:02:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:07.962 13:02:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:07.962 13:02:48 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90086 00:18:07.962 13:02:48 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:07.962 13:02:48 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:07.962 13:02:48 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90086 00:18:07.962 13:02:48 -- common/autotest_common.sh@829 -- # '[' -z 90086 ']' 00:18:07.962 13:02:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.962 13:02:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.962 13:02:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
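For reference, the environment that nvmf_veth_init builds in the trace above condenses to roughly the sketch below. The namespace, interface, and bridge names and the 10.0.0.0/24 addresses are the ones printed in the log; teardown of pre-existing devices and all error handling are omitted, so treat this as an illustration of the topology rather than the exact script.

  # initiator stays in the default namespace, the target gets its own
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up        # bridge joins the three peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                         # reach both target addresses from the host

Once the pings succeed, nvme-tcp is loaded on the host side and nvmf_tgt is launched inside nvmf_tgt_ns_spdk, which is the point the trace has reached here.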
00:18:07.962 13:02:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.962 13:02:48 -- common/autotest_common.sh@10 -- # set +x 00:18:09.338 13:02:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.338 13:02:49 -- common/autotest_common.sh@862 -- # return 0 00:18:09.339 13:02:49 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.339 13:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.339 13:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:09.339 13:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.339 13:02:49 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:09.339 13:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.339 13:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:09.339 Malloc0 00:18:09.339 13:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.339 13:02:49 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.339 13:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.339 13:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:09.339 13:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.339 13:02:49 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.339 13:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.339 13:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:09.339 13:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.339 13:02:49 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.339 13:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.339 13:02:49 -- common/autotest_common.sh@10 -- # set +x 00:18:09.339 13:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.339 13:02:49 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:09.339 13:02:49 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:09.339 Shutting down the fuzz application 00:18:09.339 13:02:50 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:09.597 Shutting down the fuzz application 00:18:09.597 13:02:50 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.597 13:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.597 13:02:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.597 13:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.597 13:02:50 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:09.597 13:02:50 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:09.597 13:02:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:09.597 13:02:50 -- nvmf/common.sh@116 -- # sync 00:18:09.856 13:02:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:09.856 13:02:50 -- nvmf/common.sh@119 -- # set +e 00:18:09.856 13:02:50 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:09.856 13:02:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:09.856 rmmod nvme_tcp 00:18:09.856 rmmod nvme_fabrics 00:18:09.856 rmmod nvme_keyring 00:18:09.856 13:02:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:09.856 13:02:50 -- nvmf/common.sh@123 -- # set -e 00:18:09.856 13:02:50 -- nvmf/common.sh@124 -- # return 0 00:18:09.856 13:02:50 -- nvmf/common.sh@477 -- # '[' -n 90086 ']' 00:18:09.856 13:02:50 -- nvmf/common.sh@478 -- # killprocess 90086 00:18:09.856 13:02:50 -- common/autotest_common.sh@936 -- # '[' -z 90086 ']' 00:18:09.856 13:02:50 -- common/autotest_common.sh@940 -- # kill -0 90086 00:18:09.856 13:02:50 -- common/autotest_common.sh@941 -- # uname 00:18:09.856 13:02:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.856 13:02:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90086 00:18:09.856 13:02:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:09.856 13:02:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:09.856 13:02:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90086' 00:18:09.856 killing process with pid 90086 00:18:09.856 13:02:50 -- common/autotest_common.sh@955 -- # kill 90086 00:18:09.856 13:02:50 -- common/autotest_common.sh@960 -- # wait 90086 00:18:10.115 13:02:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:10.115 13:02:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:10.115 13:02:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:10.115 13:02:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.115 13:02:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:10.115 13:02:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.115 13:02:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.115 13:02:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.115 13:02:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:10.115 13:02:50 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:10.115 00:18:10.115 real 0m2.665s 00:18:10.115 user 0m2.768s 00:18:10.115 sys 0m0.672s 00:18:10.115 13:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:10.115 ************************************ 00:18:10.115 13:02:50 -- common/autotest_common.sh@10 -- # set +x 00:18:10.115 END TEST nvmf_fuzz 00:18:10.115 ************************************ 00:18:10.115 13:02:50 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:10.115 13:02:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:10.115 13:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:10.115 13:02:50 -- common/autotest_common.sh@10 -- # set +x 00:18:10.115 ************************************ 00:18:10.115 START TEST nvmf_multiconnection 00:18:10.115 ************************************ 00:18:10.115 13:02:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:10.115 * Looking for test storage... 
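Stripped of the xtrace noise, the nvmf_fuzz pass that just finished reduces to the short sequence below. rpc_cmd is the autotest RPC helper used throughout the trace (assumed here to reach the target's default /var/tmp/spdk.sock), and the fuzzer flags are reproduced verbatim from the log rather than documented, so read this as a condensed replay of the run, not a reference invocation.

  # one malloc-backed TCP subsystem for the fuzzer to aim at
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create -b Malloc0 64 512                   # 64 MB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
  # first pass, invoked exactly as in the trace above
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  # second pass, driven from the canned example.json command set via -j
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The multiconnection test that starts next reuses the same nvmftestinit topology but creates eleven such subsystems (cnode1 through cnode11, per NVMF_SUBSYS=11) and connects to each of them in turn.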
00:18:10.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:10.115 13:02:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:10.115 13:02:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:10.115 13:02:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:10.374 13:02:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:10.374 13:02:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:10.374 13:02:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:10.374 13:02:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:10.374 13:02:50 -- scripts/common.sh@335 -- # IFS=.-: 00:18:10.374 13:02:50 -- scripts/common.sh@335 -- # read -ra ver1 00:18:10.374 13:02:50 -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.374 13:02:50 -- scripts/common.sh@336 -- # read -ra ver2 00:18:10.374 13:02:50 -- scripts/common.sh@337 -- # local 'op=<' 00:18:10.374 13:02:50 -- scripts/common.sh@339 -- # ver1_l=2 00:18:10.374 13:02:50 -- scripts/common.sh@340 -- # ver2_l=1 00:18:10.374 13:02:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:10.374 13:02:50 -- scripts/common.sh@343 -- # case "$op" in 00:18:10.374 13:02:50 -- scripts/common.sh@344 -- # : 1 00:18:10.374 13:02:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:10.374 13:02:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.374 13:02:50 -- scripts/common.sh@364 -- # decimal 1 00:18:10.374 13:02:50 -- scripts/common.sh@352 -- # local d=1 00:18:10.374 13:02:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.374 13:02:50 -- scripts/common.sh@354 -- # echo 1 00:18:10.374 13:02:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:10.374 13:02:50 -- scripts/common.sh@365 -- # decimal 2 00:18:10.374 13:02:50 -- scripts/common.sh@352 -- # local d=2 00:18:10.374 13:02:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.374 13:02:50 -- scripts/common.sh@354 -- # echo 2 00:18:10.374 13:02:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:10.374 13:02:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:10.374 13:02:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:10.374 13:02:50 -- scripts/common.sh@367 -- # return 0 00:18:10.374 13:02:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.374 13:02:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.374 --rc genhtml_branch_coverage=1 00:18:10.374 --rc genhtml_function_coverage=1 00:18:10.374 --rc genhtml_legend=1 00:18:10.374 --rc geninfo_all_blocks=1 00:18:10.374 --rc geninfo_unexecuted_blocks=1 00:18:10.374 00:18:10.374 ' 00:18:10.374 13:02:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.374 --rc genhtml_branch_coverage=1 00:18:10.374 --rc genhtml_function_coverage=1 00:18:10.374 --rc genhtml_legend=1 00:18:10.375 --rc geninfo_all_blocks=1 00:18:10.375 --rc geninfo_unexecuted_blocks=1 00:18:10.375 00:18:10.375 ' 00:18:10.375 13:02:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:10.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.375 --rc genhtml_branch_coverage=1 00:18:10.375 --rc genhtml_function_coverage=1 00:18:10.375 --rc genhtml_legend=1 00:18:10.375 --rc geninfo_all_blocks=1 00:18:10.375 --rc geninfo_unexecuted_blocks=1 00:18:10.375 00:18:10.375 ' 00:18:10.375 
13:02:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:10.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.375 --rc genhtml_branch_coverage=1 00:18:10.375 --rc genhtml_function_coverage=1 00:18:10.375 --rc genhtml_legend=1 00:18:10.375 --rc geninfo_all_blocks=1 00:18:10.375 --rc geninfo_unexecuted_blocks=1 00:18:10.375 00:18:10.375 ' 00:18:10.375 13:02:50 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:10.375 13:02:50 -- nvmf/common.sh@7 -- # uname -s 00:18:10.375 13:02:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.375 13:02:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.375 13:02:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.375 13:02:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.375 13:02:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.375 13:02:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.375 13:02:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.375 13:02:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.375 13:02:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.375 13:02:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.375 13:02:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:18:10.375 13:02:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:18:10.375 13:02:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.375 13:02:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.375 13:02:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:10.375 13:02:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:10.375 13:02:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.375 13:02:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.375 13:02:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.375 13:02:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.375 13:02:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.375 13:02:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.375 13:02:50 -- paths/export.sh@5 -- # export PATH 00:18:10.375 13:02:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.375 13:02:50 -- nvmf/common.sh@46 -- # : 0 00:18:10.375 13:02:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:10.375 13:02:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:10.375 13:02:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:10.375 13:02:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.375 13:02:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.375 13:02:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:10.375 13:02:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:10.375 13:02:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:10.375 13:02:51 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.375 13:02:51 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.375 13:02:51 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:10.375 13:02:51 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:10.375 13:02:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:10.375 13:02:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.375 13:02:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:10.375 13:02:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:10.375 13:02:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:10.375 13:02:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.375 13:02:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.375 13:02:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.375 13:02:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:10.375 13:02:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:10.375 13:02:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:10.375 13:02:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:10.375 13:02:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:10.375 13:02:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:10.375 13:02:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.375 13:02:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.375 13:02:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:10.375 13:02:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:10.375 13:02:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:10.375 13:02:51 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:10.375 13:02:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:10.375 13:02:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.375 13:02:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:10.375 13:02:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:10.375 13:02:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:10.375 13:02:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:10.375 13:02:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:10.375 13:02:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:10.375 Cannot find device "nvmf_tgt_br" 00:18:10.375 13:02:51 -- nvmf/common.sh@154 -- # true 00:18:10.375 13:02:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.375 Cannot find device "nvmf_tgt_br2" 00:18:10.375 13:02:51 -- nvmf/common.sh@155 -- # true 00:18:10.375 13:02:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:10.375 13:02:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:10.375 Cannot find device "nvmf_tgt_br" 00:18:10.375 13:02:51 -- nvmf/common.sh@157 -- # true 00:18:10.375 13:02:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:10.375 Cannot find device "nvmf_tgt_br2" 00:18:10.375 13:02:51 -- nvmf/common.sh@158 -- # true 00:18:10.375 13:02:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:10.375 13:02:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:10.375 13:02:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.375 13:02:51 -- nvmf/common.sh@161 -- # true 00:18:10.375 13:02:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.375 13:02:51 -- nvmf/common.sh@162 -- # true 00:18:10.375 13:02:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.375 13:02:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.375 13:02:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.375 13:02:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.634 13:02:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.634 13:02:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.634 13:02:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.634 13:02:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:10.634 13:02:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:10.634 13:02:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:10.634 13:02:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:10.634 13:02:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:10.634 13:02:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:10.634 13:02:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.634 13:02:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:10.634 13:02:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.634 13:02:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:10.634 13:02:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:10.634 13:02:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.634 13:02:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.634 13:02:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.634 13:02:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.634 13:02:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.634 13:02:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:10.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:10.634 00:18:10.634 --- 10.0.0.2 ping statistics --- 00:18:10.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.634 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:10.634 13:02:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:10.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:10.634 00:18:10.634 --- 10.0.0.3 ping statistics --- 00:18:10.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.634 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:10.634 13:02:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:10.634 00:18:10.634 --- 10.0.0.1 ping statistics --- 00:18:10.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.634 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:10.634 13:02:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.634 13:02:51 -- nvmf/common.sh@421 -- # return 0 00:18:10.634 13:02:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.634 13:02:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.634 13:02:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:10.634 13:02:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:10.634 13:02:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.634 13:02:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:10.634 13:02:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:10.634 13:02:51 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:10.634 13:02:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:10.634 13:02:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.634 13:02:51 -- common/autotest_common.sh@10 -- # set +x 00:18:10.634 13:02:51 -- nvmf/common.sh@469 -- # nvmfpid=90309 00:18:10.634 13:02:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.634 13:02:51 -- nvmf/common.sh@470 -- # waitforlisten 90309 00:18:10.634 13:02:51 -- common/autotest_common.sh@829 -- # '[' -z 90309 ']' 00:18:10.634 13:02:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.634 13:02:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.634 13:02:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.635 13:02:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.635 13:02:51 -- common/autotest_common.sh@10 -- # set +x 00:18:10.635 [2024-12-13 13:02:51.385262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:10.635 [2024-12-13 13:02:51.385360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.893 [2024-12-13 13:02:51.524558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.893 [2024-12-13 13:02:51.589030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.893 [2024-12-13 13:02:51.589156] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.893 [2024-12-13 13:02:51.589168] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.893 [2024-12-13 13:02:51.589176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.893 [2024-12-13 13:02:51.589341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.893 [2024-12-13 13:02:51.589718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.893 [2024-12-13 13:02:51.589979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.893 [2024-12-13 13:02:51.589985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.830 13:02:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.830 13:02:52 -- common/autotest_common.sh@862 -- # return 0 00:18:11.830 13:02:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.830 13:02:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.830 13:02:52 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 [2024-12-13 13:02:52.362708] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:11.830 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.830 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 Malloc1 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 [2024-12-13 13:02:52.438350] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.830 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 Malloc2 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.830 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 Malloc3 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:11.830 
13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.830 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 Malloc4 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.830 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:11.830 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 Malloc5 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.090 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 Malloc6 00:18:12.090 13:02:52 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.090 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 Malloc7 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.090 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 Malloc8 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 
-- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.090 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 Malloc9 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.090 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.090 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.090 13:02:52 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:12.090 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.090 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 Malloc10 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:12.349 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.349 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:12.349 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.349 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:12.349 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.349 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.349 13:02:52 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:12.349 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.349 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 Malloc11 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:12.349 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.349 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:12.349 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.349 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:12.349 13:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.349 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.349 13:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.349 13:02:52 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:12.349 13:02:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.349 13:02:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.608 13:02:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:12.608 13:02:53 -- common/autotest_common.sh@1187 -- # local i=0 00:18:12.608 13:02:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.608 13:02:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:12.608 13:02:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:14.512 13:02:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:14.512 13:02:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:14.512 13:02:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:14.512 13:02:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:14.512 13:02:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.513 13:02:55 -- common/autotest_common.sh@1197 -- # return 0 00:18:14.513 13:02:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.513 13:02:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:14.771 13:02:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:14.771 13:02:55 -- common/autotest_common.sh@1187 -- # local i=0 00:18:14.771 13:02:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.771 13:02:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:14.771 13:02:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:16.673 13:02:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:16.673 13:02:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:16.673 13:02:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:16.673 13:02:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:16.673 13:02:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.673 13:02:57 -- common/autotest_common.sh@1197 -- # return 0 00:18:16.673 13:02:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.673 13:02:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:16.931 13:02:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:16.931 13:02:57 -- common/autotest_common.sh@1187 -- # local i=0 00:18:16.931 13:02:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.931 13:02:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:16.931 13:02:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:18.858 13:02:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:18.858 13:02:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:18.858 13:02:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:18.858 13:02:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:18.858 13:02:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.858 13:02:59 -- common/autotest_common.sh@1197 -- # return 0 00:18:18.858 13:02:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.858 13:02:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:19.117 13:02:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:19.117 13:02:59 -- common/autotest_common.sh@1187 -- # local i=0 00:18:19.117 13:02:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.117 13:02:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:19.117 13:02:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:21.020 13:03:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:21.020 13:03:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:21.020 13:03:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:21.020 13:03:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:21.020 13:03:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.020 13:03:01 -- common/autotest_common.sh@1197 -- # return 0 00:18:21.020 13:03:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.020 13:03:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:21.279 13:03:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:21.279 13:03:01 -- common/autotest_common.sh@1187 -- # local i=0 00:18:21.279 13:03:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.279 13:03:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:21.279 13:03:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:23.182 13:03:03 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:23.182 13:03:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:23.182 13:03:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:23.182 13:03:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:23.182 13:03:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.182 13:03:03 -- common/autotest_common.sh@1197 -- # return 0 00:18:23.182 13:03:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.182 13:03:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:23.441 13:03:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:23.441 13:03:04 -- common/autotest_common.sh@1187 -- # local i=0 00:18:23.441 13:03:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.441 13:03:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:23.441 13:03:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:25.973 13:03:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:25.973 13:03:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:25.973 13:03:06 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:25.973 13:03:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:25.973 13:03:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.973 13:03:06 -- common/autotest_common.sh@1197 -- # return 0 00:18:25.973 13:03:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:25.973 13:03:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:25.973 13:03:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:25.973 13:03:06 -- common/autotest_common.sh@1187 -- # local i=0 00:18:25.973 13:03:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.973 13:03:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:25.973 13:03:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:27.876 13:03:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:27.876 13:03:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:27.876 13:03:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:27.876 13:03:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:27.876 13:03:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.876 13:03:08 -- common/autotest_common.sh@1197 -- # return 0 00:18:27.876 13:03:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:27.876 13:03:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:27.876 13:03:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:27.876 13:03:08 -- common/autotest_common.sh@1187 -- # local i=0 00:18:27.876 13:03:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.876 13:03:08 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:27.876 13:03:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:29.778 13:03:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:29.778 13:03:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:29.778 13:03:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:30.037 13:03:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:30.037 13:03:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.037 13:03:10 -- common/autotest_common.sh@1197 -- # return 0 00:18:30.037 13:03:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:30.037 13:03:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:30.037 13:03:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:30.037 13:03:10 -- common/autotest_common.sh@1187 -- # local i=0 00:18:30.037 13:03:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.037 13:03:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:30.037 13:03:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:32.569 13:03:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:32.569 13:03:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:32.569 13:03:12 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:32.569 13:03:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:32.569 13:03:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.569 13:03:12 -- common/autotest_common.sh@1197 -- # return 0 00:18:32.569 13:03:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.569 13:03:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:32.569 13:03:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:32.569 13:03:12 -- common/autotest_common.sh@1187 -- # local i=0 00:18:32.569 13:03:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.569 13:03:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:32.569 13:03:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:34.474 13:03:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:34.474 13:03:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:34.474 13:03:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:34.474 13:03:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:34.474 13:03:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.474 13:03:14 -- common/autotest_common.sh@1197 -- # return 0 00:18:34.474 13:03:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.474 13:03:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:34.474 13:03:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:34.474 13:03:15 -- common/autotest_common.sh@1187 -- # local i=0 
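[Editor's note] The trace above covers the setup half of the multiconnection test: steps 21-25 create, per subsystem, a 64 MB malloc bdev with 512-byte blocks, an NVMe-oF subsystem with serial SPDK$i, a namespace backed by that bdev, and a TCP listener on 10.0.0.2:4420; the connect/wait pattern repeated above (and continuing below for cnode10 and cnode11) is steps 28-30. A condensed sketch of those two loops, reconstructed from the traced commands, follows. It is not the literal script: scripts/rpc.py is assumed to be the client behind rpc_cmd, and the polling cadence only approximates the waitforserial helper; the addresses, NQNs, hostid UUID and 15-retry bound are the values visible in this log.
# Editor's sketch (not part of the captured log)
UUID=bbff34b2-04c8-46f0-b010-522cecaddf29
# Target-side setup, one subsystem per iteration (multiconnection.sh steps 21-25)
for i in $(seq 1 11); do
    scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
# Host-side connect loop (steps 28-30): connect, then poll lsblk for the SPDK$i serial
for i in $(seq 1 11); do
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$UUID" --hostid="$UUID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    tries=0
    while (( tries++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )) && break
    done
done
Once all eleven namespaces are visible as /dev/nvme*n1 devices, the fio-wrapper read and randwrite passes recorded below drive all of them in parallel.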
00:18:34.474 13:03:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.474 13:03:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:34.474 13:03:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:37.008 13:03:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:37.008 13:03:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:37.008 13:03:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:37.008 13:03:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:37.008 13:03:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.008 13:03:17 -- common/autotest_common.sh@1197 -- # return 0 00:18:37.008 13:03:17 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:37.008 [global] 00:18:37.008 thread=1 00:18:37.008 invalidate=1 00:18:37.008 rw=read 00:18:37.008 time_based=1 00:18:37.008 runtime=10 00:18:37.008 ioengine=libaio 00:18:37.008 direct=1 00:18:37.008 bs=262144 00:18:37.008 iodepth=64 00:18:37.008 norandommap=1 00:18:37.008 numjobs=1 00:18:37.008 00:18:37.008 [job0] 00:18:37.008 filename=/dev/nvme0n1 00:18:37.008 [job1] 00:18:37.008 filename=/dev/nvme10n1 00:18:37.008 [job2] 00:18:37.008 filename=/dev/nvme1n1 00:18:37.008 [job3] 00:18:37.008 filename=/dev/nvme2n1 00:18:37.008 [job4] 00:18:37.008 filename=/dev/nvme3n1 00:18:37.008 [job5] 00:18:37.008 filename=/dev/nvme4n1 00:18:37.008 [job6] 00:18:37.008 filename=/dev/nvme5n1 00:18:37.008 [job7] 00:18:37.008 filename=/dev/nvme6n1 00:18:37.008 [job8] 00:18:37.008 filename=/dev/nvme7n1 00:18:37.008 [job9] 00:18:37.008 filename=/dev/nvme8n1 00:18:37.008 [job10] 00:18:37.008 filename=/dev/nvme9n1 00:18:37.008 Could not set queue depth (nvme0n1) 00:18:37.008 Could not set queue depth (nvme10n1) 00:18:37.008 Could not set queue depth (nvme1n1) 00:18:37.008 Could not set queue depth (nvme2n1) 00:18:37.008 Could not set queue depth (nvme3n1) 00:18:37.008 Could not set queue depth (nvme4n1) 00:18:37.008 Could not set queue depth (nvme5n1) 00:18:37.008 Could not set queue depth (nvme6n1) 00:18:37.008 Could not set queue depth (nvme7n1) 00:18:37.008 Could not set queue depth (nvme8n1) 00:18:37.008 Could not set queue depth (nvme9n1) 00:18:37.008 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:37.008 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.008 fio-3.35 00:18:37.008 Starting 11 threads 00:18:49.216 00:18:49.216 job0: (groupid=0, jobs=1): err= 0: pid=90781: Fri Dec 13 13:03:27 2024 00:18:49.216 read: IOPS=420, BW=105MiB/s (110MB/s)(1064MiB/10134msec) 00:18:49.216 slat (usec): min=15, max=107202, avg=2233.76, stdev=9147.64 00:18:49.216 clat (msec): min=20, max=296, avg=149.79, stdev=40.10 00:18:49.216 lat (msec): min=20, max=296, avg=152.02, stdev=41.45 00:18:49.216 clat percentiles (msec): 00:18:49.216 | 1.00th=[ 47], 5.00th=[ 79], 10.00th=[ 92], 20.00th=[ 109], 00:18:49.216 | 30.00th=[ 144], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 167], 00:18:49.216 | 70.00th=[ 171], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 197], 00:18:49.216 | 99.00th=[ 262], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:18:49.216 | 99.99th=[ 296] 00:18:49.216 bw ( KiB/s): min=78848, max=173056, per=6.76%, avg=107310.10, stdev=26245.11, samples=20 00:18:49.216 iops : min= 308, max= 676, avg=419.10, stdev=102.47, samples=20 00:18:49.216 lat (msec) : 50=1.43%, 100=14.73%, 250=82.59%, 500=1.25% 00:18:49.216 cpu : usr=0.21%, sys=1.40%, ctx=837, majf=0, minf=4097 00:18:49.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:49.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.216 issued rwts: total=4257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.216 job1: (groupid=0, jobs=1): err= 0: pid=90782: Fri Dec 13 13:03:27 2024 00:18:49.216 read: IOPS=598, BW=150MiB/s (157MB/s)(1507MiB/10068msec) 00:18:49.216 slat (usec): min=14, max=127507, avg=1575.94, stdev=6451.63 00:18:49.216 clat (msec): min=5, max=352, avg=105.18, stdev=29.05 00:18:49.216 lat (msec): min=5, max=352, avg=106.76, stdev=29.87 00:18:49.216 clat percentiles (msec): 00:18:49.216 | 1.00th=[ 33], 5.00th=[ 62], 10.00th=[ 80], 20.00th=[ 89], 00:18:49.216 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 108], 00:18:49.216 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 133], 95.00th=[ 171], 00:18:49.216 | 99.00th=[ 197], 99.50th=[ 207], 99.90th=[ 215], 99.95th=[ 218], 00:18:49.216 | 99.99th=[ 351] 00:18:49.216 bw ( KiB/s): min=88064, max=218187, per=9.62%, avg=152673.10, stdev=26876.00, samples=20 00:18:49.216 iops : min= 344, max= 852, avg=596.20, stdev=104.89, samples=20 00:18:49.216 lat (msec) : 10=0.30%, 20=0.20%, 50=2.87%, 100=38.43%, 250=58.19% 00:18:49.216 lat (msec) : 500=0.02% 00:18:49.216 cpu : usr=0.29%, sys=2.02%, ctx=1116, majf=0, minf=4097 00:18:49.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:49.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.216 issued rwts: total=6027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.216 job2: (groupid=0, jobs=1): err= 0: pid=90783: Fri Dec 13 13:03:27 2024 00:18:49.216 read: IOPS=603, BW=151MiB/s (158MB/s)(1528MiB/10132msec) 00:18:49.216 slat (usec): min=18, max=154952, avg=1551.00, stdev=7323.30 00:18:49.216 clat (msec): min=9, max=336, avg=104.30, stdev=57.57 00:18:49.216 lat (msec): min=9, max=388, avg=105.85, stdev=58.59 00:18:49.216 clat percentiles 
(msec): 00:18:49.216 | 1.00th=[ 13], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 34], 00:18:49.216 | 30.00th=[ 89], 40.00th=[ 95], 50.00th=[ 102], 60.00th=[ 109], 00:18:49.216 | 70.00th=[ 121], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 192], 00:18:49.216 | 99.00th=[ 253], 99.50th=[ 317], 99.90th=[ 330], 99.95th=[ 338], 00:18:49.216 | 99.99th=[ 338] 00:18:49.216 bw ( KiB/s): min=72192, max=596480, per=9.75%, avg=154811.10, stdev=109328.24, samples=20 00:18:49.216 iops : min= 282, max= 2330, avg=604.65, stdev=427.09, samples=20 00:18:49.216 lat (msec) : 10=0.20%, 20=5.09%, 50=17.36%, 100=24.98%, 250=51.00% 00:18:49.216 lat (msec) : 500=1.37% 00:18:49.216 cpu : usr=0.24%, sys=2.07%, ctx=1056, majf=0, minf=4097 00:18:49.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:49.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.216 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.216 job3: (groupid=0, jobs=1): err= 0: pid=90784: Fri Dec 13 13:03:27 2024 00:18:49.216 read: IOPS=470, BW=118MiB/s (123MB/s)(1184MiB/10075msec) 00:18:49.216 slat (usec): min=15, max=106886, avg=2046.97, stdev=8708.06 00:18:49.216 clat (msec): min=2, max=239, avg=133.84, stdev=43.27 00:18:49.216 lat (msec): min=2, max=289, avg=135.89, stdev=44.65 00:18:49.216 clat percentiles (msec): 00:18:49.216 | 1.00th=[ 7], 5.00th=[ 75], 10.00th=[ 84], 20.00th=[ 97], 00:18:49.216 | 30.00th=[ 104], 40.00th=[ 120], 50.00th=[ 144], 60.00th=[ 159], 00:18:49.216 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:18:49.216 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 220], 99.95th=[ 228], 00:18:49.216 | 99.99th=[ 241] 00:18:49.216 bw ( KiB/s): min=87040, max=179712, per=7.54%, avg=119599.40, stdev=32997.77, samples=20 00:18:49.216 iops : min= 340, max= 702, avg=467.10, stdev=128.80, samples=20 00:18:49.216 lat (msec) : 4=0.25%, 10=1.41%, 20=0.84%, 50=1.18%, 100=21.70% 00:18:49.216 lat (msec) : 250=74.60% 00:18:49.216 cpu : usr=0.21%, sys=1.53%, ctx=909, majf=0, minf=4098 00:18:49.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:49.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.216 issued rwts: total=4737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.216 job4: (groupid=0, jobs=1): err= 0: pid=90785: Fri Dec 13 13:03:27 2024 00:18:49.216 read: IOPS=778, BW=195MiB/s (204MB/s)(1972MiB/10133msec) 00:18:49.216 slat (usec): min=15, max=81911, avg=1249.01, stdev=5469.87 00:18:49.216 clat (msec): min=3, max=318, avg=80.78, stdev=47.05 00:18:49.216 lat (msec): min=3, max=318, avg=82.03, stdev=47.95 00:18:49.216 clat percentiles (msec): 00:18:49.216 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 32], 00:18:49.216 | 30.00th=[ 44], 40.00th=[ 69], 50.00th=[ 82], 60.00th=[ 91], 00:18:49.216 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 157], 95.00th=[ 171], 00:18:49.216 | 99.00th=[ 186], 99.50th=[ 211], 99.90th=[ 317], 99.95th=[ 317], 00:18:49.216 | 99.99th=[ 317] 00:18:49.216 bw ( KiB/s): min=85333, max=544768, per=12.62%, avg=200248.85, stdev=120519.50, samples=20 00:18:49.216 iops : min= 333, max= 2128, avg=782.15, stdev=470.76, samples=20 00:18:49.216 lat (msec) : 
4=0.06%, 10=1.15%, 20=2.80%, 50=27.98%, 100=36.57% 00:18:49.216 lat (msec) : 250=31.01%, 500=0.43% 00:18:49.216 cpu : usr=0.31%, sys=2.41%, ctx=1474, majf=0, minf=4097 00:18:49.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:49.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.216 issued rwts: total=7889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.216 job5: (groupid=0, jobs=1): err= 0: pid=90786: Fri Dec 13 13:03:27 2024 00:18:49.216 read: IOPS=463, BW=116MiB/s (121MB/s)(1167MiB/10073msec) 00:18:49.216 slat (usec): min=20, max=105721, avg=2137.58, stdev=8232.78 00:18:49.216 clat (msec): min=19, max=244, avg=135.76, stdev=39.46 00:18:49.216 lat (msec): min=19, max=303, avg=137.90, stdev=40.72 00:18:49.216 clat percentiles (msec): 00:18:49.216 | 1.00th=[ 27], 5.00th=[ 80], 10.00th=[ 91], 20.00th=[ 97], 00:18:49.216 | 30.00th=[ 104], 40.00th=[ 117], 50.00th=[ 146], 60.00th=[ 159], 00:18:49.216 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:18:49.216 | 99.00th=[ 207], 99.50th=[ 213], 99.90th=[ 241], 99.95th=[ 243], 00:18:49.216 | 99.99th=[ 245] 00:18:49.216 bw ( KiB/s): min=86528, max=184832, per=7.42%, avg=117808.90, stdev=32188.46, samples=20 00:18:49.216 iops : min= 338, max= 722, avg=460.10, stdev=125.64, samples=20 00:18:49.216 lat (msec) : 20=0.04%, 50=1.26%, 100=23.66%, 250=75.03% 00:18:49.216 cpu : usr=0.16%, sys=1.59%, ctx=825, majf=0, minf=4097 00:18:49.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:49.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.216 issued rwts: total=4666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.216 job6: (groupid=0, jobs=1): err= 0: pid=90787: Fri Dec 13 13:03:27 2024 00:18:49.216 read: IOPS=544, BW=136MiB/s (143MB/s)(1379MiB/10121msec) 00:18:49.216 slat (usec): min=15, max=189893, avg=1740.12, stdev=7862.57 00:18:49.216 clat (msec): min=2, max=269, avg=115.54, stdev=59.02 00:18:49.217 lat (msec): min=2, max=383, avg=117.28, stdev=60.31 00:18:49.217 clat percentiles (msec): 00:18:49.217 | 1.00th=[ 12], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 54], 00:18:49.217 | 30.00th=[ 65], 40.00th=[ 83], 50.00th=[ 108], 60.00th=[ 155], 00:18:49.217 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:18:49.217 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 271], 00:18:49.217 | 99.99th=[ 271] 00:18:49.217 bw ( KiB/s): min=86016, max=323719, per=8.80%, avg=139595.95, stdev=71352.79, samples=20 00:18:49.217 iops : min= 336, max= 1264, avg=545.05, stdev=278.67, samples=20 00:18:49.217 lat (msec) : 4=0.07%, 10=0.34%, 20=2.16%, 50=12.91%, 100=32.93% 00:18:49.217 lat (msec) : 250=51.23%, 500=0.34% 00:18:49.217 cpu : usr=0.25%, sys=1.76%, ctx=1065, majf=0, minf=4097 00:18:49.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:49.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.217 issued rwts: total=5514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.217 
job7: (groupid=0, jobs=1): err= 0: pid=90788: Fri Dec 13 13:03:27 2024 00:18:49.217 read: IOPS=508, BW=127MiB/s (133MB/s)(1288MiB/10130msec) 00:18:49.217 slat (usec): min=14, max=103755, avg=1875.76, stdev=8060.31 00:18:49.217 clat (msec): min=3, max=298, avg=123.67, stdev=59.37 00:18:49.217 lat (msec): min=3, max=298, avg=125.55, stdev=60.66 00:18:49.217 clat percentiles (msec): 00:18:49.217 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 58], 00:18:49.217 | 30.00th=[ 70], 40.00th=[ 91], 50.00th=[ 153], 60.00th=[ 161], 00:18:49.217 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 192], 00:18:49.217 | 99.00th=[ 232], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 300], 00:18:49.217 | 99.99th=[ 300] 00:18:49.217 bw ( KiB/s): min=82778, max=293888, per=8.21%, avg=130290.85, stdev=68400.64, samples=20 00:18:49.217 iops : min= 323, max= 1148, avg=508.85, stdev=267.10, samples=20 00:18:49.217 lat (msec) : 4=0.06%, 10=0.85%, 20=1.13%, 50=10.85%, 100=28.88% 00:18:49.217 lat (msec) : 250=57.58%, 500=0.66% 00:18:49.217 cpu : usr=0.20%, sys=1.62%, ctx=1019, majf=0, minf=4097 00:18:49.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:49.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.217 issued rwts: total=5153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.217 job8: (groupid=0, jobs=1): err= 0: pid=90789: Fri Dec 13 13:03:27 2024 00:18:49.217 read: IOPS=616, BW=154MiB/s (162MB/s)(1553MiB/10075msec) 00:18:49.217 slat (usec): min=13, max=122878, avg=1570.41, stdev=7026.28 00:18:49.217 clat (msec): min=4, max=312, avg=102.03, stdev=50.52 00:18:49.217 lat (msec): min=4, max=312, avg=103.60, stdev=51.69 00:18:49.217 clat percentiles (msec): 00:18:49.217 | 1.00th=[ 10], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 63], 00:18:49.217 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 106], 00:18:49.217 | 70.00th=[ 120], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 184], 00:18:49.217 | 99.00th=[ 199], 99.50th=[ 207], 99.90th=[ 241], 99.95th=[ 279], 00:18:49.217 | 99.99th=[ 313] 00:18:49.217 bw ( KiB/s): min=86528, max=352768, per=9.91%, avg=157294.70, stdev=75232.35, samples=20 00:18:49.217 iops : min= 338, max= 1378, avg=614.25, stdev=293.80, samples=20 00:18:49.217 lat (msec) : 10=1.51%, 20=4.20%, 50=9.43%, 100=39.06%, 250=45.69% 00:18:49.217 lat (msec) : 500=0.10% 00:18:49.217 cpu : usr=0.19%, sys=2.02%, ctx=1171, majf=0, minf=4097 00:18:49.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:49.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.217 issued rwts: total=6211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.217 job9: (groupid=0, jobs=1): err= 0: pid=90790: Fri Dec 13 13:03:27 2024 00:18:49.217 read: IOPS=589, BW=147MiB/s (155MB/s)(1493MiB/10129msec) 00:18:49.217 slat (usec): min=16, max=136205, avg=1630.92, stdev=7140.17 00:18:49.217 clat (usec): min=518, max=297726, avg=106687.18, stdev=65251.82 00:18:49.217 lat (usec): min=961, max=318091, avg=108318.11, stdev=66525.91 00:18:49.217 clat percentiles (msec): 00:18:49.217 | 1.00th=[ 4], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 48], 00:18:49.217 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 82], 
60.00th=[ 155], 00:18:49.217 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 197], 00:18:49.217 | 99.00th=[ 220], 99.50th=[ 236], 99.90th=[ 292], 99.95th=[ 292], 00:18:49.217 | 99.99th=[ 296] 00:18:49.217 bw ( KiB/s): min=79360, max=323072, per=9.53%, avg=151275.65, stdev=88015.02, samples=20 00:18:49.217 iops : min= 310, max= 1262, avg=590.85, stdev=343.86, samples=20 00:18:49.217 lat (usec) : 750=0.02%, 1000=0.03% 00:18:49.217 lat (msec) : 2=0.25%, 4=0.80%, 10=1.05%, 20=4.25%, 50=15.35% 00:18:49.217 lat (msec) : 100=34.04%, 250=43.98%, 500=0.22% 00:18:49.217 cpu : usr=0.23%, sys=1.83%, ctx=1143, majf=0, minf=4097 00:18:49.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:49.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.217 issued rwts: total=5973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.217 job10: (groupid=0, jobs=1): err= 0: pid=90792: Fri Dec 13 13:03:27 2024 00:18:49.217 read: IOPS=621, BW=155MiB/s (163MB/s)(1573MiB/10127msec) 00:18:49.217 slat (usec): min=14, max=175793, avg=1501.25, stdev=6267.27 00:18:49.217 clat (msec): min=19, max=317, avg=101.35, stdev=46.56 00:18:49.217 lat (msec): min=20, max=355, avg=102.85, stdev=47.43 00:18:49.217 clat percentiles (msec): 00:18:49.217 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 58], 00:18:49.217 | 30.00th=[ 67], 40.00th=[ 86], 50.00th=[ 93], 60.00th=[ 101], 00:18:49.217 | 70.00th=[ 111], 80.00th=[ 155], 90.00th=[ 178], 95.00th=[ 186], 00:18:49.217 | 99.00th=[ 224], 99.50th=[ 228], 99.90th=[ 251], 99.95th=[ 317], 00:18:49.217 | 99.99th=[ 317] 00:18:49.217 bw ( KiB/s): min=87377, max=305053, per=10.04%, avg=159374.15, stdev=64531.04, samples=20 00:18:49.217 iops : min= 341, max= 1191, avg=622.40, stdev=251.97, samples=20 00:18:49.217 lat (msec) : 20=0.02%, 50=7.41%, 100=51.45%, 250=41.05%, 500=0.08% 00:18:49.217 cpu : usr=0.20%, sys=2.00%, ctx=1240, majf=0, minf=4097 00:18:49.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:49.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.217 issued rwts: total=6290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.217 00:18:49.217 Run status group 0 (all jobs): 00:18:49.217 READ: bw=1550MiB/s (1625MB/s), 105MiB/s-195MiB/s (110MB/s-204MB/s), io=15.3GiB (16.5GB), run=10068-10134msec 00:18:49.217 00:18:49.217 Disk stats (read/write): 00:18:49.217 nvme0n1: ios=8444/0, merge=0/0, ticks=1238939/0, in_queue=1238939, util=97.51% 00:18:49.217 nvme10n1: ios=11944/0, merge=0/0, ticks=1240976/0, in_queue=1240976, util=97.53% 00:18:49.217 nvme1n1: ios=12116/0, merge=0/0, ticks=1233635/0, in_queue=1233635, util=97.79% 00:18:49.217 nvme2n1: ios=9395/0, merge=0/0, ticks=1244861/0, in_queue=1244861, util=98.01% 00:18:49.217 nvme3n1: ios=15691/0, merge=0/0, ticks=1233053/0, in_queue=1233053, util=97.94% 00:18:49.217 nvme4n1: ios=9257/0, merge=0/0, ticks=1242657/0, in_queue=1242657, util=98.27% 00:18:49.217 nvme5n1: ios=10901/0, merge=0/0, ticks=1232648/0, in_queue=1232648, util=98.12% 00:18:49.217 nvme6n1: ios=10208/0, merge=0/0, ticks=1238099/0, in_queue=1238099, util=98.24% 00:18:49.217 nvme7n1: ios=12309/0, merge=0/0, ticks=1238232/0, 
in_queue=1238232, util=98.71% 00:18:49.217 nvme8n1: ios=11834/0, merge=0/0, ticks=1233818/0, in_queue=1233818, util=98.63% 00:18:49.217 nvme9n1: ios=12462/0, merge=0/0, ticks=1237507/0, in_queue=1237507, util=98.67% 00:18:49.217 13:03:27 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:49.217 [global] 00:18:49.217 thread=1 00:18:49.217 invalidate=1 00:18:49.217 rw=randwrite 00:18:49.217 time_based=1 00:18:49.217 runtime=10 00:18:49.217 ioengine=libaio 00:18:49.217 direct=1 00:18:49.217 bs=262144 00:18:49.217 iodepth=64 00:18:49.217 norandommap=1 00:18:49.217 numjobs=1 00:18:49.217 00:18:49.217 [job0] 00:18:49.217 filename=/dev/nvme0n1 00:18:49.217 [job1] 00:18:49.217 filename=/dev/nvme10n1 00:18:49.217 [job2] 00:18:49.217 filename=/dev/nvme1n1 00:18:49.217 [job3] 00:18:49.217 filename=/dev/nvme2n1 00:18:49.217 [job4] 00:18:49.217 filename=/dev/nvme3n1 00:18:49.217 [job5] 00:18:49.217 filename=/dev/nvme4n1 00:18:49.217 [job6] 00:18:49.217 filename=/dev/nvme5n1 00:18:49.217 [job7] 00:18:49.217 filename=/dev/nvme6n1 00:18:49.217 [job8] 00:18:49.217 filename=/dev/nvme7n1 00:18:49.217 [job9] 00:18:49.217 filename=/dev/nvme8n1 00:18:49.217 [job10] 00:18:49.217 filename=/dev/nvme9n1 00:18:49.217 Could not set queue depth (nvme0n1) 00:18:49.217 Could not set queue depth (nvme10n1) 00:18:49.217 Could not set queue depth (nvme1n1) 00:18:49.217 Could not set queue depth (nvme2n1) 00:18:49.217 Could not set queue depth (nvme3n1) 00:18:49.217 Could not set queue depth (nvme4n1) 00:18:49.217 Could not set queue depth (nvme5n1) 00:18:49.217 Could not set queue depth (nvme6n1) 00:18:49.217 Could not set queue depth (nvme7n1) 00:18:49.217 Could not set queue depth (nvme8n1) 00:18:49.217 Could not set queue depth (nvme9n1) 00:18:49.217 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.217 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.218 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.218 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.218 fio-3.35 00:18:49.218 Starting 11 threads 00:18:59.197 00:18:59.197 job0: (groupid=0, jobs=1): err= 0: pid=90992: Fri Dec 13 13:03:38 2024 00:18:59.197 write: IOPS=799, BW=200MiB/s (210MB/s)(2013MiB/10070msec); 0 zone resets 00:18:59.197 slat (usec): min=16, max=9413, avg=1236.52, stdev=2083.25 00:18:59.197 
clat (msec): min=11, max=147, avg=78.77, stdev= 6.66 00:18:59.197 lat (msec): min=11, max=147, avg=80.01, stdev= 6.46 00:18:59.197 clat percentiles (msec): 00:18:59.197 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 74], 20.00th=[ 75], 00:18:59.197 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 80], 00:18:59.197 | 70.00th=[ 81], 80.00th=[ 81], 90.00th=[ 81], 95.00th=[ 82], 00:18:59.197 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 138], 99.95th=[ 142], 00:18:59.197 | 99.99th=[ 148] 00:18:59.197 bw ( KiB/s): min=163328, max=209920, per=12.79%, avg=204497.85, stdev=9831.32, samples=20 00:18:59.197 iops : min= 638, max= 820, avg=798.80, stdev=38.40, samples=20 00:18:59.197 lat (msec) : 20=0.07%, 50=0.15%, 100=97.69%, 250=2.09% 00:18:59.197 cpu : usr=1.58%, sys=2.21%, ctx=10471, majf=0, minf=1 00:18:59.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:59.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.197 issued rwts: total=0,8052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.197 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.197 job1: (groupid=0, jobs=1): err= 0: pid=90993: Fri Dec 13 13:03:38 2024 00:18:59.197 write: IOPS=799, BW=200MiB/s (210MB/s)(2013MiB/10072msec); 0 zone resets 00:18:59.197 slat (usec): min=19, max=10723, avg=1236.56, stdev=2086.61 00:18:59.197 clat (msec): min=11, max=150, avg=78.80, stdev= 6.84 00:18:59.197 lat (msec): min=11, max=150, avg=80.04, stdev= 6.64 00:18:59.197 clat percentiles (msec): 00:18:59.197 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 74], 20.00th=[ 75], 00:18:59.197 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 80], 00:18:59.197 | 70.00th=[ 81], 80.00th=[ 81], 90.00th=[ 81], 95.00th=[ 82], 00:18:59.197 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 140], 99.95th=[ 146], 00:18:59.197 | 99.99th=[ 150] 00:18:59.197 bw ( KiB/s): min=162304, max=211033, per=12.78%, avg=204450.95, stdev=10080.08, samples=20 00:18:59.197 iops : min= 634, max= 824, avg=798.60, stdev=39.36, samples=20 00:18:59.197 lat (msec) : 20=0.05%, 50=0.20%, 100=97.53%, 250=2.22% 00:18:59.197 cpu : usr=1.62%, sys=2.14%, ctx=7937, majf=0, minf=1 00:18:59.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:59.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.197 issued rwts: total=0,8051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.197 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.197 job2: (groupid=0, jobs=1): err= 0: pid=91005: Fri Dec 13 13:03:38 2024 00:18:59.197 write: IOPS=303, BW=75.8MiB/s (79.5MB/s)(773MiB/10202msec); 0 zone resets 00:18:59.197 slat (usec): min=23, max=32823, avg=3185.64, stdev=5859.49 00:18:59.197 clat (msec): min=3, max=417, avg=207.80, stdev=43.13 00:18:59.197 lat (msec): min=3, max=417, avg=210.98, stdev=43.47 00:18:59.197 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 26], 5.00th=[ 109], 10.00th=[ 190], 20.00th=[ 205], 00:18:59.198 | 30.00th=[ 209], 40.00th=[ 213], 50.00th=[ 218], 60.00th=[ 222], 00:18:59.198 | 70.00th=[ 224], 80.00th=[ 228], 90.00th=[ 232], 95.00th=[ 234], 00:18:59.198 | 99.00th=[ 309], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 418], 00:18:59.198 | 99.99th=[ 418] 00:18:59.198 bw ( KiB/s): min=71680, max=140056, per=4.85%, avg=77556.75, stdev=14845.21, samples=20 00:18:59.198 iops : min= 280, 
max= 547, avg=302.90, stdev=57.97, samples=20 00:18:59.198 lat (msec) : 4=0.13%, 20=0.52%, 50=1.62%, 100=1.81%, 250=94.44% 00:18:59.198 lat (msec) : 500=1.49% 00:18:59.198 cpu : usr=0.57%, sys=1.00%, ctx=3525, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,3093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.198 job3: (groupid=0, jobs=1): err= 0: pid=91006: Fri Dec 13 13:03:38 2024 00:18:59.198 write: IOPS=776, BW=194MiB/s (203MB/s)(1954MiB/10071msec); 0 zone resets 00:18:59.198 slat (usec): min=19, max=28233, avg=1274.18, stdev=2174.87 00:18:59.198 clat (msec): min=31, max=151, avg=81.15, stdev= 5.03 00:18:59.198 lat (msec): min=31, max=151, avg=82.42, stdev= 4.73 00:18:59.198 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 73], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:18:59.198 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 83], 00:18:59.198 | 70.00th=[ 84], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 85], 00:18:59.198 | 99.00th=[ 87], 99.50th=[ 100], 99.90th=[ 140], 99.95th=[ 146], 00:18:59.198 | 99.99th=[ 153] 00:18:59.198 bw ( KiB/s): min=195072, max=200704, per=12.41%, avg=198502.30, stdev=1451.73, samples=20 00:18:59.198 iops : min= 762, max= 784, avg=775.35, stdev= 5.68, samples=20 00:18:59.198 lat (msec) : 50=0.31%, 100=99.21%, 250=0.49% 00:18:59.198 cpu : usr=1.40%, sys=2.15%, ctx=9146, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,7817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.198 job4: (groupid=0, jobs=1): err= 0: pid=91007: Fri Dec 13 13:03:38 2024 00:18:59.198 write: IOPS=268, BW=67.2MiB/s (70.5MB/s)(686MiB/10201msec); 0 zone resets 00:18:59.198 slat (usec): min=20, max=53256, avg=3640.43, stdev=6918.86 00:18:59.198 clat (msec): min=17, max=425, avg=234.15, stdev=29.11 00:18:59.198 lat (msec): min=17, max=425, avg=237.79, stdev=28.60 00:18:59.198 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 120], 5.00th=[ 199], 10.00th=[ 209], 20.00th=[ 218], 00:18:59.198 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 243], 00:18:59.198 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 259], 00:18:59.198 | 99.00th=[ 330], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 426], 00:18:59.198 | 99.99th=[ 426] 00:18:59.198 bw ( KiB/s): min=61317, max=73728, per=4.29%, avg=68608.60, stdev=2924.28, samples=20 00:18:59.198 iops : min= 239, max= 288, avg=267.95, stdev=11.50, samples=20 00:18:59.198 lat (msec) : 20=0.07%, 50=0.29%, 100=0.44%, 250=80.64%, 500=18.56% 00:18:59.198 cpu : usr=0.61%, sys=0.77%, ctx=2653, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,2743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:18:59.198 job5: (groupid=0, jobs=1): err= 0: pid=91008: Fri Dec 13 13:03:38 2024 00:18:59.198 write: IOPS=277, BW=69.4MiB/s (72.8MB/s)(708MiB/10200msec); 0 zone resets 00:18:59.198 slat (usec): min=21, max=50540, avg=3527.07, stdev=6543.73 00:18:59.198 clat (msec): min=24, max=423, avg=226.88, stdev=29.27 00:18:59.198 lat (msec): min=24, max=423, avg=230.40, stdev=28.91 00:18:59.198 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 102], 5.00th=[ 197], 10.00th=[ 207], 20.00th=[ 213], 00:18:59.198 | 30.00th=[ 220], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 234], 00:18:59.198 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 255], 00:18:59.198 | 99.00th=[ 330], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 426], 00:18:59.198 | 99.99th=[ 426] 00:18:59.198 bw ( KiB/s): min=63488, max=73728, per=4.43%, avg=70893.10, stdev=2610.08, samples=20 00:18:59.198 iops : min= 248, max= 288, avg=276.85, stdev=10.21, samples=20 00:18:59.198 lat (msec) : 50=0.42%, 100=0.56%, 250=89.94%, 500=9.07% 00:18:59.198 cpu : usr=0.61%, sys=0.75%, ctx=2567, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,2832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.198 job6: (groupid=0, jobs=1): err= 0: pid=91009: Fri Dec 13 13:03:38 2024 00:18:59.198 write: IOPS=1478, BW=370MiB/s (388MB/s)(3712MiB/10040msec); 0 zone resets 00:18:59.198 slat (usec): min=15, max=11284, avg=659.20, stdev=1111.23 00:18:59.198 clat (msec): min=3, max=166, avg=42.61, stdev= 3.99 00:18:59.198 lat (msec): min=3, max=166, avg=43.27, stdev= 4.05 00:18:59.198 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:18:59.198 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 43], 00:18:59.198 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 45], 00:18:59.198 | 99.00th=[ 47], 99.50th=[ 56], 99.90th=[ 84], 99.95th=[ 153], 00:18:59.198 | 99.99th=[ 165] 00:18:59.198 bw ( KiB/s): min=363008, max=384512, per=23.66%, avg=378406.55, stdev=4762.92, samples=20 00:18:59.198 iops : min= 1418, max= 1502, avg=1478.15, stdev=18.60, samples=20 00:18:59.198 lat (msec) : 4=0.03%, 10=0.03%, 20=0.03%, 50=99.26%, 100=0.59% 00:18:59.198 lat (msec) : 250=0.06% 00:18:59.198 cpu : usr=2.38%, sys=3.47%, ctx=19833, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,14846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.198 job7: (groupid=0, jobs=1): err= 0: pid=91010: Fri Dec 13 13:03:38 2024 00:18:59.198 write: IOPS=272, BW=68.0MiB/s (71.3MB/s)(694MiB/10205msec); 0 zone resets 00:18:59.198 slat (usec): min=22, max=46297, avg=3600.07, stdev=6702.72 00:18:59.198 clat (msec): min=13, max=419, avg=231.46, stdev=29.88 00:18:59.198 lat (msec): min=13, max=419, avg=235.06, stdev=29.48 00:18:59.198 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 91], 5.00th=[ 199], 10.00th=[ 207], 20.00th=[ 218], 00:18:59.198 | 30.00th=[ 226], 40.00th=[ 232], 50.00th=[ 236], 60.00th=[ 241], 
00:18:59.198 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 255], 00:18:59.198 | 99.00th=[ 326], 99.50th=[ 376], 99.90th=[ 405], 99.95th=[ 422], 00:18:59.198 | 99.99th=[ 422] 00:18:59.198 bw ( KiB/s): min=63488, max=76288, per=4.34%, avg=69464.45, stdev=3098.53, samples=20 00:18:59.198 iops : min= 248, max= 298, avg=271.30, stdev=12.11, samples=20 00:18:59.198 lat (msec) : 20=0.04%, 50=0.29%, 100=0.72%, 250=88.58%, 500=10.37% 00:18:59.198 cpu : usr=0.55%, sys=0.69%, ctx=3019, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,2777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.198 job8: (groupid=0, jobs=1): err= 0: pid=91011: Fri Dec 13 13:03:38 2024 00:18:59.198 write: IOPS=282, BW=70.7MiB/s (74.1MB/s)(721MiB/10199msec); 0 zone resets 00:18:59.198 slat (usec): min=24, max=39421, avg=3463.16, stdev=6288.35 00:18:59.198 clat (msec): min=14, max=440, avg=222.83, stdev=29.35 00:18:59.198 lat (msec): min=14, max=440, avg=226.29, stdev=29.08 00:18:59.198 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 93], 5.00th=[ 197], 10.00th=[ 203], 20.00th=[ 209], 00:18:59.198 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 230], 00:18:59.198 | 70.00th=[ 234], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 245], 00:18:59.198 | 99.00th=[ 330], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 443], 00:18:59.198 | 99.99th=[ 443] 00:18:59.198 bw ( KiB/s): min=66048, max=77824, per=4.51%, avg=72177.00, stdev=3230.20, samples=20 00:18:59.198 iops : min= 258, max= 304, avg=281.90, stdev=12.59, samples=20 00:18:59.198 lat (msec) : 20=0.03%, 50=0.28%, 100=0.69%, 250=97.26%, 500=1.73% 00:18:59.198 cpu : usr=0.59%, sys=0.87%, ctx=2524, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,2883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.198 job9: (groupid=0, jobs=1): err= 0: pid=91012: Fri Dec 13 13:03:38 2024 00:18:59.198 write: IOPS=277, BW=69.4MiB/s (72.8MB/s)(709MiB/10204msec); 0 zone resets 00:18:59.198 slat (usec): min=22, max=48118, avg=3524.41, stdev=6508.63 00:18:59.198 clat (msec): min=4, max=440, avg=226.80, stdev=31.61 00:18:59.198 lat (msec): min=4, max=440, avg=230.33, stdev=31.34 00:18:59.198 clat percentiles (msec): 00:18:59.198 | 1.00th=[ 80], 5.00th=[ 197], 10.00th=[ 205], 20.00th=[ 213], 00:18:59.198 | 30.00th=[ 220], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 234], 00:18:59.198 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 255], 00:18:59.198 | 99.00th=[ 330], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 439], 00:18:59.198 | 99.99th=[ 439] 00:18:59.198 bw ( KiB/s): min=63488, max=76288, per=4.44%, avg=70930.20, stdev=3416.23, samples=20 00:18:59.198 iops : min= 248, max= 298, avg=277.05, stdev=13.33, samples=20 00:18:59.198 lat (msec) : 10=0.18%, 50=0.42%, 100=0.71%, 250=89.13%, 500=9.56% 00:18:59.198 cpu : usr=0.52%, sys=0.85%, ctx=3697, majf=0, minf=1 00:18:59.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 
00:18:59.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.198 issued rwts: total=0,2834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.198 job10: (groupid=0, jobs=1): err= 0: pid=91013: Fri Dec 13 13:03:38 2024 00:18:59.199 write: IOPS=776, BW=194MiB/s (204MB/s)(1956MiB/10074msec); 0 zone resets 00:18:59.199 slat (usec): min=20, max=31061, avg=1272.80, stdev=2171.85 00:18:59.199 clat (msec): min=3, max=152, avg=81.10, stdev= 6.12 00:18:59.199 lat (msec): min=3, max=152, avg=82.37, stdev= 5.88 00:18:59.199 clat percentiles (msec): 00:18:59.199 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:18:59.199 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 83], 00:18:59.199 | 70.00th=[ 84], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 85], 00:18:59.199 | 99.00th=[ 88], 99.50th=[ 102], 99.90th=[ 142], 99.95th=[ 148], 00:18:59.199 | 99.99th=[ 153] 00:18:59.199 bw ( KiB/s): min=196608, max=200704, per=12.42%, avg=198681.60, stdev=1107.81, samples=20 00:18:59.199 iops : min= 768, max= 784, avg=776.10, stdev= 4.33, samples=20 00:18:59.199 lat (msec) : 4=0.05%, 20=0.15%, 50=0.31%, 100=98.95%, 250=0.54% 00:18:59.199 cpu : usr=1.51%, sys=2.35%, ctx=9076, majf=0, minf=1 00:18:59.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:59.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.199 issued rwts: total=0,7824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.199 00:18:59.199 Run status group 0 (all jobs): 00:18:59.199 WRITE: bw=1562MiB/s (1638MB/s), 67.2MiB/s-370MiB/s (70.5MB/s-388MB/s), io=15.6GiB (16.7GB), run=10040-10205msec 00:18:59.199 00:18:59.199 Disk stats (read/write): 00:18:59.199 nvme0n1: ios=49/15930, merge=0/0, ticks=28/1214100, in_queue=1214128, util=97.59% 00:18:59.199 nvme10n1: ios=49/15936, merge=0/0, ticks=45/1214194, in_queue=1214239, util=97.84% 00:18:59.199 nvme1n1: ios=15/6048, merge=0/0, ticks=15/1207211, in_queue=1207226, util=97.86% 00:18:59.199 nvme2n1: ios=13/15466, merge=0/0, ticks=49/1214292, in_queue=1214341, util=97.98% 00:18:59.199 nvme3n1: ios=13/5353, merge=0/0, ticks=26/1204869, in_queue=1204895, util=98.02% 00:18:59.199 nvme4n1: ios=0/5528, merge=0/0, ticks=0/1205419, in_queue=1205419, util=98.22% 00:18:59.199 nvme5n1: ios=0/29550, merge=0/0, ticks=0/1221936, in_queue=1221936, util=98.50% 00:18:59.199 nvme6n1: ios=0/5418, merge=0/0, ticks=0/1205603, in_queue=1205603, util=98.42% 00:18:59.199 nvme7n1: ios=0/5635, merge=0/0, ticks=0/1205764, in_queue=1205764, util=98.67% 00:18:59.199 nvme8n1: ios=0/5538, merge=0/0, ticks=0/1205995, in_queue=1205995, util=98.90% 00:18:59.199 nvme9n1: ios=0/15485, merge=0/0, ticks=0/1214344, in_queue=1214344, util=98.88% 00:18:59.199 13:03:38 -- target/multiconnection.sh@36 -- # sync 00:18:59.199 13:03:38 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:59.199 13:03:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.199 13:03:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:59.199 
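[Editor's note] After the randwrite pass and the sync, the trace that starts above and continues below is the teardown loop (multiconnection.sh steps 37-40): for each of the eleven subsystems the host disconnects the controller, waits until no block device with the matching SPDK$i serial remains, and then deletes the subsystem on the target over RPC. A hedged sketch, reconstructed from the traced commands; scripts/rpc.py as the concrete client and the 1-second poll interval are assumptions, the disconnect and delete commands are the ones shown in this log.
# Editor's sketch (not part of the captured log)
for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # waitforserial_disconnect: wait until lsblk no longer lists serial SPDK$i
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done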
13:03:38 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:18:59.199 13:03:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:18:59.199 13:03:38 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.199 13:03:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:59.199 13:03:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:59.199 13:03:38 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:18:59.199 13:03:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:18:59.199 13:03:38 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:59.199 13:03:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:59.199 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:59.199 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:18:59.199 13:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:59.199 13:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:59.199 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:59.199 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:39 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:18:59.199 13:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:59.199 13:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:59.199 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:59.199 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:18:59.199 13:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:59.199 13:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:59.199 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:59.199 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:18:59.199 13:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:59.199 13:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:59.199 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:59.199 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:39 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:18:59.199 13:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:59.199 13:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:59.199 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:59.199 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.199 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:18:59.199 13:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.199 13:03:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:59.199 13:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.199 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.199 13:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.199 13:03:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.199 13:03:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:59.199 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:59.200 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:59.200 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.200 13:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.200 13:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:18:59.200 13:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.200 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:18:59.200 13:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.200 13:03:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:59.200 13:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.200 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.200 13:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.200 13:03:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.200 13:03:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:59.459 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:59.460 13:03:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:59.460 13:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.460 13:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.460 13:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:18:59.460 13:03:39 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.460 13:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:18:59.460 13:03:40 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.460 13:03:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:59.460 13:03:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.460 13:03:40 -- common/autotest_common.sh@10 -- # set +x 00:18:59.460 13:03:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.460 13:03:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.460 13:03:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:59.460 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:59.460 13:03:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:59.460 13:03:40 -- common/autotest_common.sh@1208 -- # local i=0 00:18:59.460 13:03:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:59.460 13:03:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:18:59.460 13:03:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:59.460 13:03:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:18:59.460 13:03:40 -- common/autotest_common.sh@1220 -- # return 0 00:18:59.460 13:03:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:59.460 13:03:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.460 13:03:40 -- common/autotest_common.sh@10 -- # set +x 00:18:59.460 13:03:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.460 13:03:40 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:59.460 13:03:40 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:59.460 13:03:40 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:59.460 13:03:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:59.460 13:03:40 -- nvmf/common.sh@116 -- # sync 00:18:59.460 13:03:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:59.460 13:03:40 -- nvmf/common.sh@119 -- # set +e 00:18:59.460 13:03:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:59.460 13:03:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:59.460 rmmod nvme_tcp 00:18:59.460 rmmod nvme_fabrics 00:18:59.726 rmmod nvme_keyring 00:18:59.726 13:03:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:59.726 13:03:40 -- nvmf/common.sh@123 -- # set -e 00:18:59.726 13:03:40 -- nvmf/common.sh@124 -- # return 0 00:18:59.726 13:03:40 -- nvmf/common.sh@477 -- # '[' -n 90309 ']' 00:18:59.726 13:03:40 -- nvmf/common.sh@478 -- # killprocess 90309 00:18:59.726 13:03:40 -- common/autotest_common.sh@936 -- # '[' -z 90309 ']' 00:18:59.726 13:03:40 -- common/autotest_common.sh@940 -- # kill -0 90309 00:18:59.726 13:03:40 -- common/autotest_common.sh@941 -- # uname 00:18:59.726 13:03:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.726 13:03:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90309 00:18:59.726 13:03:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.726 13:03:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.726 killing process with pid 90309 00:18:59.726 13:03:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90309' 00:18:59.726 13:03:40 -- common/autotest_common.sh@955 -- # kill 90309 00:18:59.726 13:03:40 -- 
common/autotest_common.sh@960 -- # wait 90309 00:19:00.001 13:03:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:00.001 13:03:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:00.001 13:03:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:00.001 13:03:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:00.001 13:03:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:00.001 13:03:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.001 13:03:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.001 13:03:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.274 13:03:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:00.274 00:19:00.274 real 0m49.970s 00:19:00.274 user 2m47.763s 00:19:00.274 sys 0m26.317s 00:19:00.274 13:03:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:00.274 13:03:40 -- common/autotest_common.sh@10 -- # set +x 00:19:00.274 ************************************ 00:19:00.274 END TEST nvmf_multiconnection 00:19:00.274 ************************************ 00:19:00.274 13:03:40 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:00.274 13:03:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:00.274 13:03:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:00.274 13:03:40 -- common/autotest_common.sh@10 -- # set +x 00:19:00.274 ************************************ 00:19:00.274 START TEST nvmf_initiator_timeout 00:19:00.274 ************************************ 00:19:00.274 13:03:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:00.274 * Looking for test storage... 00:19:00.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:00.275 13:03:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:00.275 13:03:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:00.275 13:03:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:00.275 13:03:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:00.275 13:03:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:00.275 13:03:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:00.275 13:03:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:00.275 13:03:40 -- scripts/common.sh@335 -- # IFS=.-: 00:19:00.275 13:03:40 -- scripts/common.sh@335 -- # read -ra ver1 00:19:00.275 13:03:40 -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.275 13:03:40 -- scripts/common.sh@336 -- # read -ra ver2 00:19:00.275 13:03:40 -- scripts/common.sh@337 -- # local 'op=<' 00:19:00.275 13:03:40 -- scripts/common.sh@339 -- # ver1_l=2 00:19:00.275 13:03:40 -- scripts/common.sh@340 -- # ver2_l=1 00:19:00.275 13:03:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:00.275 13:03:40 -- scripts/common.sh@343 -- # case "$op" in 00:19:00.275 13:03:40 -- scripts/common.sh@344 -- # : 1 00:19:00.275 13:03:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:00.275 13:03:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.275 13:03:40 -- scripts/common.sh@364 -- # decimal 1 00:19:00.275 13:03:40 -- scripts/common.sh@352 -- # local d=1 00:19:00.275 13:03:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.275 13:03:40 -- scripts/common.sh@354 -- # echo 1 00:19:00.275 13:03:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:00.275 13:03:40 -- scripts/common.sh@365 -- # decimal 2 00:19:00.275 13:03:41 -- scripts/common.sh@352 -- # local d=2 00:19:00.275 13:03:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.275 13:03:41 -- scripts/common.sh@354 -- # echo 2 00:19:00.275 13:03:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:00.275 13:03:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:00.275 13:03:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:00.275 13:03:41 -- scripts/common.sh@367 -- # return 0 00:19:00.275 13:03:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.275 13:03:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.275 --rc genhtml_branch_coverage=1 00:19:00.275 --rc genhtml_function_coverage=1 00:19:00.275 --rc genhtml_legend=1 00:19:00.275 --rc geninfo_all_blocks=1 00:19:00.275 --rc geninfo_unexecuted_blocks=1 00:19:00.275 00:19:00.275 ' 00:19:00.275 13:03:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.275 --rc genhtml_branch_coverage=1 00:19:00.275 --rc genhtml_function_coverage=1 00:19:00.275 --rc genhtml_legend=1 00:19:00.275 --rc geninfo_all_blocks=1 00:19:00.275 --rc geninfo_unexecuted_blocks=1 00:19:00.275 00:19:00.275 ' 00:19:00.275 13:03:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.275 --rc genhtml_branch_coverage=1 00:19:00.275 --rc genhtml_function_coverage=1 00:19:00.275 --rc genhtml_legend=1 00:19:00.275 --rc geninfo_all_blocks=1 00:19:00.275 --rc geninfo_unexecuted_blocks=1 00:19:00.275 00:19:00.275 ' 00:19:00.275 13:03:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.275 --rc genhtml_branch_coverage=1 00:19:00.275 --rc genhtml_function_coverage=1 00:19:00.275 --rc genhtml_legend=1 00:19:00.275 --rc geninfo_all_blocks=1 00:19:00.275 --rc geninfo_unexecuted_blocks=1 00:19:00.275 00:19:00.275 ' 00:19:00.275 13:03:41 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:00.275 13:03:41 -- nvmf/common.sh@7 -- # uname -s 00:19:00.275 13:03:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.275 13:03:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.275 13:03:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.275 13:03:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.275 13:03:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.275 13:03:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.275 13:03:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.275 13:03:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.275 13:03:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.275 13:03:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.275 13:03:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
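The host NQN printed above is generated fresh for this run; the host ID set just below is the UUID portion of that NQN, and both are handed to the later nvme connect call through the NVME_HOST array. A minimal sketch, assuming the host ID is simply the NQN's UUID suffix (the trace only shows the resulting values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: host ID is everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later: nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420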
00:19:00.275 13:03:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:19:00.275 13:03:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.275 13:03:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.275 13:03:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:00.275 13:03:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:00.275 13:03:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.275 13:03:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.275 13:03:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.275 13:03:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.275 13:03:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.275 13:03:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.275 13:03:41 -- paths/export.sh@5 -- # export PATH 00:19:00.275 13:03:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.275 13:03:41 -- nvmf/common.sh@46 -- # : 0 00:19:00.275 13:03:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:00.275 13:03:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:00.275 13:03:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:00.275 13:03:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.275 13:03:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.275 13:03:41 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:00.275 13:03:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:00.275 13:03:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:00.275 13:03:41 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.275 13:03:41 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.275 13:03:41 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:00.275 13:03:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:00.275 13:03:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.275 13:03:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:00.275 13:03:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:00.275 13:03:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:00.275 13:03:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.275 13:03:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.275 13:03:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.275 13:03:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:00.275 13:03:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:00.275 13:03:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:00.275 13:03:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:00.275 13:03:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:00.275 13:03:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:00.275 13:03:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.275 13:03:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.275 13:03:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:00.275 13:03:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:00.275 13:03:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:00.275 13:03:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:00.275 13:03:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:00.275 13:03:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.275 13:03:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:00.275 13:03:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:00.275 13:03:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:00.275 13:03:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:00.275 13:03:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:00.534 13:03:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:00.534 Cannot find device "nvmf_tgt_br" 00:19:00.534 13:03:41 -- nvmf/common.sh@154 -- # true 00:19:00.534 13:03:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.534 Cannot find device "nvmf_tgt_br2" 00:19:00.534 13:03:41 -- nvmf/common.sh@155 -- # true 00:19:00.534 13:03:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:00.534 13:03:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:00.534 Cannot find device "nvmf_tgt_br" 00:19:00.534 13:03:41 -- nvmf/common.sh@157 -- # true 00:19:00.534 13:03:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:00.534 Cannot find device "nvmf_tgt_br2" 00:19:00.534 13:03:41 -- nvmf/common.sh@158 -- # true 00:19:00.534 13:03:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:00.534 13:03:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:00.534 13:03:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:00.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.534 13:03:41 -- nvmf/common.sh@161 -- # true 00:19:00.534 13:03:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.534 13:03:41 -- nvmf/common.sh@162 -- # true 00:19:00.534 13:03:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.534 13:03:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.534 13:03:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.534 13:03:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.534 13:03:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.534 13:03:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.534 13:03:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.534 13:03:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:00.534 13:03:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:00.534 13:03:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:00.534 13:03:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:00.534 13:03:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:00.534 13:03:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:00.534 13:03:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.534 13:03:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.534 13:03:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.534 13:03:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:00.534 13:03:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:00.534 13:03:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.534 13:03:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.793 13:03:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.793 13:03:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.793 13:03:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.793 13:03:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:00.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:19:00.793 00:19:00.793 --- 10.0.0.2 ping statistics --- 00:19:00.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.793 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:00.793 13:03:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:00.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:19:00.793 00:19:00.793 --- 10.0.0.3 ping statistics --- 00:19:00.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.793 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:00.793 13:03:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:00.793 00:19:00.793 --- 10.0.0.1 ping statistics --- 00:19:00.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.793 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:00.793 13:03:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.793 13:03:41 -- nvmf/common.sh@421 -- # return 0 00:19:00.793 13:03:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:00.793 13:03:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.793 13:03:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:00.793 13:03:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:00.793 13:03:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.793 13:03:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:00.793 13:03:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:00.793 13:03:41 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:00.793 13:03:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:00.793 13:03:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.793 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:19:00.793 13:03:41 -- nvmf/common.sh@469 -- # nvmfpid=91392 00:19:00.793 13:03:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:00.793 13:03:41 -- nvmf/common.sh@470 -- # waitforlisten 91392 00:19:00.793 13:03:41 -- common/autotest_common.sh@829 -- # '[' -z 91392 ']' 00:19:00.793 13:03:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.793 13:03:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.793 13:03:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.793 13:03:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.793 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:19:00.793 [2024-12-13 13:03:41.425332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:00.793 [2024-12-13 13:03:41.425421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.793 [2024-12-13 13:03:41.565591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.052 [2024-12-13 13:03:41.630922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.052 [2024-12-13 13:03:41.631068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.052 [2024-12-13 13:03:41.631089] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.052 [2024-12-13 13:03:41.631099] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
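Condensed from the nvmf_veth_init and nvmfappstart steps traced above: the initiator stays in the root namespace on 10.0.0.1, the SPDK target runs inside the nvmf_tgt_ns_spdk namespace listening on 10.0.0.2 and 10.0.0.3, and the veth peers are tied together by the nvmf_br bridge. The essential commands, as logged:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side,    10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side,    10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                               # the *_br peers are enslaved to this bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # target app started inside the namespace with core mask 0xF and all trace groups enabled
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF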
00:19:01.052 [2024-12-13 13:03:41.631339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.052 [2024-12-13 13:03:41.631451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.052 [2024-12-13 13:03:41.631980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.052 [2024-12-13 13:03:41.631986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.620 13:03:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.620 13:03:42 -- common/autotest_common.sh@862 -- # return 0 00:19:01.620 13:03:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:01.620 13:03:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:01.620 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:19:01.879 13:03:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:01.879 13:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.879 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:19:01.879 Malloc0 00:19:01.879 13:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:01.879 13:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.879 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:19:01.879 Delay0 00:19:01.879 13:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:01.879 13:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.879 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:19:01.879 [2024-12-13 13:03:42.472851] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.879 13:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.879 13:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.879 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:19:01.879 13:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:01.879 13:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.879 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:19:01.879 13:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.879 13:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.879 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:19:01.879 [2024-12-13 13:03:42.501034] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.879 13:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.879 13:03:42 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.138 13:03:42 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.138 13:03:42 -- common/autotest_common.sh@1187 -- # local i=0 00:19:02.138 13:03:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.138 13:03:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:02.138 13:03:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:04.042 13:03:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:04.042 13:03:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:04.042 13:03:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.042 13:03:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:04.042 13:03:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.042 13:03:44 -- common/autotest_common.sh@1197 -- # return 0 00:19:04.042 13:03:44 -- target/initiator_timeout.sh@35 -- # fio_pid=91474 00:19:04.042 13:03:44 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:04.042 13:03:44 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:04.042 [global] 00:19:04.042 thread=1 00:19:04.042 invalidate=1 00:19:04.042 rw=write 00:19:04.042 time_based=1 00:19:04.042 runtime=60 00:19:04.042 ioengine=libaio 00:19:04.042 direct=1 00:19:04.042 bs=4096 00:19:04.042 iodepth=1 00:19:04.042 norandommap=0 00:19:04.042 numjobs=1 00:19:04.042 00:19:04.042 verify_dump=1 00:19:04.042 verify_backlog=512 00:19:04.042 verify_state_save=0 00:19:04.042 do_verify=1 00:19:04.042 verify=crc32c-intel 00:19:04.042 [job0] 00:19:04.042 filename=/dev/nvme0n1 00:19:04.042 Could not set queue depth (nvme0n1) 00:19:04.300 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.300 fio-3.35 00:19:04.300 Starting 1 thread 00:19:07.585 13:03:47 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:07.585 13:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.585 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:19:07.585 true 00:19:07.585 13:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.585 13:03:47 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:07.585 13:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.585 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:19:07.585 true 00:19:07.585 13:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.585 13:03:47 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:07.585 13:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.585 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:19:07.585 true 00:19:07.585 13:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.585 13:03:47 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:07.585 13:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.585 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:19:07.585 true 00:19:07.585 13:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.585 13:03:47 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:10.117 13:03:50 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:10.117 13:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.117 13:03:50 -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 true 00:19:10.117 13:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.117 13:03:50 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:10.117 13:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.117 13:03:50 -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 true 00:19:10.117 13:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.117 13:03:50 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:10.117 13:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.117 13:03:50 -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 true 00:19:10.117 13:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.117 13:03:50 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:10.117 13:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.117 13:03:50 -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 true 00:19:10.117 13:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.117 13:03:50 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:10.117 13:03:50 -- target/initiator_timeout.sh@54 -- # wait 91474 00:20:06.392 00:20:06.392 job0: (groupid=0, jobs=1): err= 0: pid=91495: Fri Dec 13 13:04:44 2024 00:20:06.392 read: IOPS=776, BW=3107KiB/s (3182kB/s)(182MiB/60000msec) 00:20:06.392 slat (usec): min=10, max=232, avg=14.64, stdev= 5.63 00:20:06.392 clat (usec): min=150, max=7451, avg=207.59, stdev=44.45 00:20:06.392 lat (usec): min=162, max=7464, avg=222.24, stdev=45.21 00:20:06.392 clat percentiles (usec): 00:20:06.393 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 186], 00:20:06.393 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:20:06.393 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 253], 00:20:06.393 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 351], 99.95th=[ 400], 00:20:06.393 | 99.99th=[ 775] 00:20:06.393 write: IOPS=785, BW=3140KiB/s (3216kB/s)(184MiB/60000msec); 0 zone resets 00:20:06.393 slat (usec): min=16, max=11703, avg=23.15, stdev=65.08 00:20:06.393 clat (usec): min=118, max=40702k, avg=1027.42, stdev=187537.80 00:20:06.393 lat (usec): min=136, max=40702k, avg=1050.58, stdev=187537.79 00:20:06.393 clat percentiles (usec): 00:20:06.393 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 145], 00:20:06.393 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:20:06.393 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202], 00:20:06.393 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 269], 99.95th=[ 383], 00:20:06.393 | 99.99th=[ 1090] 00:20:06.393 bw ( KiB/s): min= 200, max=12288, per=100.00%, avg=9447.67, stdev=1954.14, samples=39 00:20:06.393 iops : min= 50, max= 3072, avg=2361.90, stdev=488.53, samples=39 00:20:06.393 lat (usec) : 250=97.06%, 500=2.90%, 750=0.02%, 1000=0.01% 00:20:06.393 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:20:06.393 cpu : usr=0.58%, sys=2.13%, ctx=93739, majf=0, minf=5 00:20:06.393 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:06.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.393 issued rwts: total=46607,47104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.393 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.393 00:20:06.393 Run status group 0 (all jobs): 00:20:06.393 READ: bw=3107KiB/s (3182kB/s), 3107KiB/s-3107KiB/s (3182kB/s-3182kB/s), io=182MiB (191MB), run=60000-60000msec 00:20:06.393 WRITE: bw=3140KiB/s (3216kB/s), 3140KiB/s-3140KiB/s (3216kB/s-3216kB/s), io=184MiB (193MB), run=60000-60000msec 00:20:06.393 00:20:06.393 Disk stats (read/write): 00:20:06.393 nvme0n1: ios=46814/46592, merge=0/0, ticks=10330/8384, in_queue=18714, util=99.79% 00:20:06.393 13:04:44 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.393 13:04:45 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:06.393 13:04:45 -- common/autotest_common.sh@1208 -- # local i=0 00:20:06.393 13:04:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:06.393 13:04:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.393 13:04:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:06.393 13:04:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.393 13:04:45 -- common/autotest_common.sh@1220 -- # return 0 00:20:06.393 13:04:45 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:06.393 13:04:45 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:06.393 nvmf hotplug test: fio successful as expected 00:20:06.393 13:04:45 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.393 13:04:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.393 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:06.393 13:04:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.393 13:04:45 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:06.393 13:04:45 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:06.393 13:04:45 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:06.393 13:04:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:06.393 13:04:45 -- nvmf/common.sh@116 -- # sync 00:20:06.393 13:04:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:06.393 13:04:45 -- nvmf/common.sh@119 -- # set +e 00:20:06.393 13:04:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:06.393 13:04:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:06.393 rmmod nvme_tcp 00:20:06.393 rmmod nvme_fabrics 00:20:06.393 rmmod nvme_keyring 00:20:06.393 13:04:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:06.393 13:04:45 -- nvmf/common.sh@123 -- # set -e 00:20:06.393 13:04:45 -- nvmf/common.sh@124 -- # return 0 00:20:06.393 13:04:45 -- nvmf/common.sh@477 -- # '[' -n 91392 ']' 00:20:06.393 13:04:45 -- nvmf/common.sh@478 -- # killprocess 91392 00:20:06.393 13:04:45 -- common/autotest_common.sh@936 -- # '[' -z 91392 ']' 00:20:06.393 13:04:45 -- common/autotest_common.sh@940 -- # kill -0 91392 00:20:06.393 13:04:45 -- common/autotest_common.sh@941 -- # uname 00:20:06.393 13:04:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.393 13:04:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91392 00:20:06.393 13:04:45 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:06.393 13:04:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:06.393 13:04:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91392' 00:20:06.393 killing process with pid 91392 00:20:06.393 13:04:45 -- common/autotest_common.sh@955 -- # kill 91392 00:20:06.393 13:04:45 -- common/autotest_common.sh@960 -- # wait 91392 00:20:06.393 13:04:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:06.393 13:04:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:06.393 13:04:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:06.393 13:04:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.393 13:04:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:06.393 13:04:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.393 13:04:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.393 13:04:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.393 13:04:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:06.393 00:20:06.393 real 1m4.599s 00:20:06.393 user 4m5.998s 00:20:06.393 sys 0m8.833s 00:20:06.393 13:04:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:06.393 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:06.393 ************************************ 00:20:06.393 END TEST nvmf_initiator_timeout 00:20:06.393 ************************************ 00:20:06.393 13:04:45 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:06.393 13:04:45 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:06.393 13:04:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.393 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:06.393 13:04:45 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:06.393 13:04:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.393 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:06.393 13:04:45 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:06.393 13:04:45 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:06.393 13:04:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.393 13:04:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.393 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:06.393 ************************************ 00:20:06.393 START TEST nvmf_multicontroller 00:20:06.393 ************************************ 00:20:06.393 13:04:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:06.393 * Looking for test storage... 
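The initiator_timeout case that just completed is driven entirely through the delay bdev: fio runs a 60-second write/verify job against Delay0, whose latencies (in microseconds) are raised to roughly 31 s for a few seconds and then dropped back to 30 µs so the job can finish, which the "fio successful as expected" message above confirms. Condensed from the rpc_cmd calls in the trace:

    # Delay0 sits on top of Malloc0; latency arguments are in microseconds
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # while the 60 s fio write/verify job is running, stall I/O for ~31 s
    rpc_cmd bdev_delay_update_latency Delay0 avg_read   31000000
    rpc_cmd bdev_delay_update_latency Delay0 avg_write  31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_read   31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # restore 30 µs latencies so the job can complete before its runtime ends
    rpc_cmd bdev_delay_update_latency Delay0 avg_read  30
    rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
    rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
    wait "$fio_pid"   # fio_status=0 is expected for this variant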
00:20:06.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.393 13:04:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:06.393 13:04:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:06.393 13:04:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:06.393 13:04:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:06.393 13:04:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:06.393 13:04:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:06.393 13:04:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:06.393 13:04:45 -- scripts/common.sh@335 -- # IFS=.-: 00:20:06.393 13:04:45 -- scripts/common.sh@335 -- # read -ra ver1 00:20:06.393 13:04:45 -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.393 13:04:45 -- scripts/common.sh@336 -- # read -ra ver2 00:20:06.393 13:04:45 -- scripts/common.sh@337 -- # local 'op=<' 00:20:06.393 13:04:45 -- scripts/common.sh@339 -- # ver1_l=2 00:20:06.393 13:04:45 -- scripts/common.sh@340 -- # ver2_l=1 00:20:06.393 13:04:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:06.393 13:04:45 -- scripts/common.sh@343 -- # case "$op" in 00:20:06.393 13:04:45 -- scripts/common.sh@344 -- # : 1 00:20:06.393 13:04:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:06.393 13:04:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.393 13:04:45 -- scripts/common.sh@364 -- # decimal 1 00:20:06.393 13:04:45 -- scripts/common.sh@352 -- # local d=1 00:20:06.393 13:04:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.393 13:04:45 -- scripts/common.sh@354 -- # echo 1 00:20:06.393 13:04:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:06.393 13:04:45 -- scripts/common.sh@365 -- # decimal 2 00:20:06.393 13:04:45 -- scripts/common.sh@352 -- # local d=2 00:20:06.393 13:04:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.393 13:04:45 -- scripts/common.sh@354 -- # echo 2 00:20:06.393 13:04:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:06.393 13:04:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:06.393 13:04:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:06.393 13:04:45 -- scripts/common.sh@367 -- # return 0 00:20:06.393 13:04:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.393 13:04:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:06.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.393 --rc genhtml_branch_coverage=1 00:20:06.393 --rc genhtml_function_coverage=1 00:20:06.393 --rc genhtml_legend=1 00:20:06.393 --rc geninfo_all_blocks=1 00:20:06.393 --rc geninfo_unexecuted_blocks=1 00:20:06.393 00:20:06.393 ' 00:20:06.393 13:04:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:06.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.393 --rc genhtml_branch_coverage=1 00:20:06.393 --rc genhtml_function_coverage=1 00:20:06.393 --rc genhtml_legend=1 00:20:06.393 --rc geninfo_all_blocks=1 00:20:06.393 --rc geninfo_unexecuted_blocks=1 00:20:06.393 00:20:06.393 ' 00:20:06.393 13:04:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:06.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.393 --rc genhtml_branch_coverage=1 00:20:06.393 --rc genhtml_function_coverage=1 00:20:06.393 --rc genhtml_legend=1 00:20:06.393 --rc geninfo_all_blocks=1 00:20:06.393 --rc geninfo_unexecuted_blocks=1 00:20:06.393 00:20:06.393 ' 00:20:06.393 
13:04:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:06.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.394 --rc genhtml_branch_coverage=1 00:20:06.394 --rc genhtml_function_coverage=1 00:20:06.394 --rc genhtml_legend=1 00:20:06.394 --rc geninfo_all_blocks=1 00:20:06.394 --rc geninfo_unexecuted_blocks=1 00:20:06.394 00:20:06.394 ' 00:20:06.394 13:04:45 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.394 13:04:45 -- nvmf/common.sh@7 -- # uname -s 00:20:06.394 13:04:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.394 13:04:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.394 13:04:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.394 13:04:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.394 13:04:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.394 13:04:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.394 13:04:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.394 13:04:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.394 13:04:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.394 13:04:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.394 13:04:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:06.394 13:04:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:06.394 13:04:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.394 13:04:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.394 13:04:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.394 13:04:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.394 13:04:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.394 13:04:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.394 13:04:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.394 13:04:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.394 13:04:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.394 13:04:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.394 13:04:45 -- paths/export.sh@5 -- # export PATH 00:20:06.394 13:04:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.394 13:04:45 -- nvmf/common.sh@46 -- # : 0 00:20:06.394 13:04:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:06.394 13:04:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:06.394 13:04:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:06.394 13:04:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.394 13:04:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.394 13:04:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:06.394 13:04:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:06.394 13:04:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:06.394 13:04:45 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:06.394 13:04:45 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:06.394 13:04:45 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:06.394 13:04:45 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:06.394 13:04:45 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.394 13:04:45 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:06.394 13:04:45 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:06.394 13:04:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:06.394 13:04:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.394 13:04:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:06.394 13:04:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:06.394 13:04:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:06.394 13:04:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.394 13:04:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.394 13:04:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.394 13:04:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:06.394 13:04:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:06.394 13:04:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:06.394 13:04:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:06.394 13:04:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:06.394 13:04:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:06.394 13:04:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.394 13:04:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:06.394 13:04:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.394 13:04:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:06.394 13:04:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.394 13:04:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.394 13:04:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.394 13:04:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.394 13:04:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.394 13:04:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.394 13:04:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.394 13:04:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.394 13:04:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:06.394 13:04:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:06.394 Cannot find device "nvmf_tgt_br" 00:20:06.394 13:04:45 -- nvmf/common.sh@154 -- # true 00:20:06.394 13:04:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.394 Cannot find device "nvmf_tgt_br2" 00:20:06.394 13:04:45 -- nvmf/common.sh@155 -- # true 00:20:06.394 13:04:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:06.394 13:04:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:06.394 Cannot find device "nvmf_tgt_br" 00:20:06.394 13:04:45 -- nvmf/common.sh@157 -- # true 00:20:06.394 13:04:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:06.394 Cannot find device "nvmf_tgt_br2" 00:20:06.394 13:04:45 -- nvmf/common.sh@158 -- # true 00:20:06.394 13:04:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:06.394 13:04:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:06.394 13:04:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.394 13:04:45 -- nvmf/common.sh@161 -- # true 00:20:06.394 13:04:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.394 13:04:45 -- nvmf/common.sh@162 -- # true 00:20:06.394 13:04:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.394 13:04:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.394 13:04:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.394 13:04:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.394 13:04:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.394 13:04:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.394 13:04:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.394 13:04:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.394 13:04:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.394 13:04:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:06.394 13:04:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:06.394 13:04:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
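The interleaved ip commands above are easier to follow as one sequence. A hedged consolidation of what nvmf_veth_init has done so far (assuming root and no conflicting interfaces; the remaining link-up, nvmf_br bridge, and iptables ACCEPT steps follow in the trace below):

# Hedged sketch of the veth/netns topology, condensed from the trace above.
ip netns add nvmf_tgt_ns_spdk                               # target gets its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target-side veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target-side veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
# (bridging the *_br ends into nvmf_br and the iptables port-4420 ACCEPT rule complete the data path below)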
00:20:06.394 13:04:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:06.394 13:04:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.394 13:04:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.394 13:04:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.394 13:04:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:06.394 13:04:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:06.394 13:04:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.394 13:04:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.394 13:04:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.394 13:04:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.394 13:04:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.394 13:04:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:06.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:20:06.394 00:20:06.394 --- 10.0.0.2 ping statistics --- 00:20:06.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.394 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:06.394 13:04:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:06.394 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.394 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:06.394 00:20:06.394 --- 10.0.0.3 ping statistics --- 00:20:06.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.394 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:06.394 13:04:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:06.395 00:20:06.395 --- 10.0.0.1 ping statistics --- 00:20:06.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.395 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:06.395 13:04:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.395 13:04:46 -- nvmf/common.sh@421 -- # return 0 00:20:06.395 13:04:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.395 13:04:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.395 13:04:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:06.395 13:04:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:06.395 13:04:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.395 13:04:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:06.395 13:04:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:06.395 13:04:46 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:06.395 13:04:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:06.395 13:04:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.395 13:04:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.395 13:04:46 -- nvmf/common.sh@469 -- # nvmfpid=92329 00:20:06.395 13:04:46 -- nvmf/common.sh@470 -- # waitforlisten 92329 00:20:06.395 13:04:46 -- common/autotest_common.sh@829 -- # '[' -z 92329 ']' 00:20:06.395 13:04:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.395 13:04:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:06.395 13:04:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.395 13:04:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.395 13:04:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.395 13:04:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.395 [2024-12-13 13:04:46.147609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:06.395 [2024-12-13 13:04:46.147700] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.395 [2024-12-13 13:04:46.285549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:06.395 [2024-12-13 13:04:46.360559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:06.395 [2024-12-13 13:04:46.361000] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.395 [2024-12-13 13:04:46.361120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.395 [2024-12-13 13:04:46.361237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
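With the namespace reachable (the three pings above), nvmfappstart launches the target binary inside it and blocks until its RPC socket answers. A hedged, stripped-down equivalent of what the trace shows, with paths as used in this run and waitforlisten approximated by a simple poll loop:

# Hedged sketch: start nvmf_tgt inside the target namespace and wait for its RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk                            # repo path as used in this run
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &       # -i: shm id, -e: tracepoint mask, -m: core mask
nvmfpid=$!
# Rough stand-in for waitforlisten: poll the UNIX-domain RPC socket until it responds.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done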
00:20:06.395 [2024-12-13 13:04:46.361475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.395 [2024-12-13 13:04:46.361994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.395 [2024-12-13 13:04:46.362003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.654 13:04:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.654 13:04:47 -- common/autotest_common.sh@862 -- # return 0 00:20:06.654 13:04:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:06.654 13:04:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 13:04:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.654 13:04:47 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 [2024-12-13 13:04:47.221612] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 Malloc0 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 [2024-12-13 13:04:47.291562] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 [2024-12-13 13:04:47.299503] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 Malloc1 00:20:06.654 13:04:47 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:06.654 13:04:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 13:04:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.654 13:04:47 -- host/multicontroller.sh@44 -- # bdevperf_pid=92381 00:20:06.654 13:04:47 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:06.654 13:04:47 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:06.654 13:04:47 -- host/multicontroller.sh@47 -- # waitforlisten 92381 /var/tmp/bdevperf.sock 00:20:06.654 13:04:47 -- common/autotest_common.sh@829 -- # '[' -z 92381 ']' 00:20:06.654 13:04:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.654 13:04:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.654 13:04:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
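The rpc_cmd calls traced above map directly onto scripts/rpc.py subcommands against the target's default socket, and bdevperf has just been started idle (-z) with its own RPC socket. A hedged restatement of the target-side configuration, followed by the controller attach/failover sequence that the next lines exercise against the bdevperf socket:

# Hedged sketch of the target-side setup shown above (socket paths and arguments as in this run).
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport, harness options
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

# Initiator side: bdevperf waits idle (-z) on its own socket; controllers are attached over that
# socket, then perform_tests kicks off the queued write job (-q 128 -o 4096 -w write -t 1).
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
SUBNQN=nqn.2016-06.io.spdk:cnode1
$BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $SUBNQN -i 10.0.0.2 -c 60000
# Re-attaching NVMe0 with a different hostnqn, pointing it at cnode2, using -x disable, or re-adding
# the same path with -x failover is expected to fail with the Code=-114 "already exists" errors below.
# Adding the second listener as another path for the same controller succeeds:
$BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $SUBNQN
$BPERF bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $SUBNQN
$BPERF bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $SUBNQN -i 10.0.0.2 -c 60000
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests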
00:20:06.654 13:04:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.654 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 13:04:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.032 13:04:48 -- common/autotest_common.sh@862 -- # return 0 00:20:08.032 13:04:48 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:08.032 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 NVMe0n1 00:20:08.032 13:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.032 13:04:48 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:08.032 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 13:04:48 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:08.032 13:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.032 1 00:20:08.032 13:04:48 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:08.032 13:04:48 -- common/autotest_common.sh@650 -- # local es=0 00:20:08.032 13:04:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:08.032 13:04:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.032 13:04:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:08.032 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 2024/12/13 13:04:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:08.032 request: 00:20:08.032 { 00:20:08.032 "method": "bdev_nvme_attach_controller", 00:20:08.032 "params": { 00:20:08.032 "name": "NVMe0", 00:20:08.032 "trtype": "tcp", 00:20:08.032 "traddr": "10.0.0.2", 00:20:08.032 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:08.032 "hostaddr": "10.0.0.2", 00:20:08.032 "hostsvcid": "60000", 00:20:08.032 "adrfam": "ipv4", 00:20:08.032 "trsvcid": "4420", 00:20:08.032 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:08.032 } 00:20:08.032 } 00:20:08.032 Got JSON-RPC error response 00:20:08.032 GoRPCClient: error on JSON-RPC call 00:20:08.032 13:04:48 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.032 13:04:48 -- 
common/autotest_common.sh@653 -- # es=1 00:20:08.032 13:04:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.032 13:04:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.032 13:04:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.032 13:04:48 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:08.032 13:04:48 -- common/autotest_common.sh@650 -- # local es=0 00:20:08.032 13:04:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:08.032 13:04:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.032 13:04:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:08.032 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 2024/12/13 13:04:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:08.032 request: 00:20:08.032 { 00:20:08.032 "method": "bdev_nvme_attach_controller", 00:20:08.032 "params": { 00:20:08.032 "name": "NVMe0", 00:20:08.032 "trtype": "tcp", 00:20:08.032 "traddr": "10.0.0.2", 00:20:08.032 "hostaddr": "10.0.0.2", 00:20:08.032 "hostsvcid": "60000", 00:20:08.032 "adrfam": "ipv4", 00:20:08.032 "trsvcid": "4420", 00:20:08.032 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:08.032 } 00:20:08.032 } 00:20:08.032 Got JSON-RPC error response 00:20:08.032 GoRPCClient: error on JSON-RPC call 00:20:08.032 13:04:48 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.032 13:04:48 -- common/autotest_common.sh@653 -- # es=1 00:20:08.032 13:04:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.032 13:04:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.032 13:04:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.032 13:04:48 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@650 -- # local es=0 00:20:08.032 13:04:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.032 13:04:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.032 13:04:48 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.032 13:04:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.032 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 2024/12/13 13:04:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:08.032 request: 00:20:08.032 { 00:20:08.032 "method": "bdev_nvme_attach_controller", 00:20:08.032 "params": { 00:20:08.032 "name": "NVMe0", 00:20:08.032 "trtype": "tcp", 00:20:08.032 "traddr": "10.0.0.2", 00:20:08.032 "hostaddr": "10.0.0.2", 00:20:08.032 "hostsvcid": "60000", 00:20:08.032 "adrfam": "ipv4", 00:20:08.032 "trsvcid": "4420", 00:20:08.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.032 "multipath": "disable" 00:20:08.032 } 00:20:08.032 } 00:20:08.032 Got JSON-RPC error response 00:20:08.032 GoRPCClient: error on JSON-RPC call 00:20:08.032 13:04:48 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.033 13:04:48 -- common/autotest_common.sh@653 -- # es=1 00:20:08.033 13:04:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.033 13:04:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.033 13:04:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.033 13:04:48 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:08.033 13:04:48 -- common/autotest_common.sh@650 -- # local es=0 00:20:08.033 13:04:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:08.033 13:04:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.033 13:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.033 13:04:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.033 13:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.033 13:04:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:08.033 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 2024/12/13 13:04:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:08.033 request: 00:20:08.033 { 00:20:08.033 "method": "bdev_nvme_attach_controller", 00:20:08.033 "params": { 00:20:08.033 "name": "NVMe0", 
00:20:08.033 "trtype": "tcp", 00:20:08.033 "traddr": "10.0.0.2", 00:20:08.033 "hostaddr": "10.0.0.2", 00:20:08.033 "hostsvcid": "60000", 00:20:08.033 "adrfam": "ipv4", 00:20:08.033 "trsvcid": "4420", 00:20:08.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.033 "multipath": "failover" 00:20:08.033 } 00:20:08.033 } 00:20:08.033 Got JSON-RPC error response 00:20:08.033 GoRPCClient: error on JSON-RPC call 00:20:08.033 13:04:48 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.033 13:04:48 -- common/autotest_common.sh@653 -- # es=1 00:20:08.033 13:04:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.033 13:04:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.033 13:04:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.033 13:04:48 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:08.033 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 00:20:08.033 13:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.033 13:04:48 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:08.033 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 13:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.033 13:04:48 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:08.033 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 00:20:08.033 13:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.033 13:04:48 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:08.033 13:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 13:04:48 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:08.033 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 13:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.033 13:04:48 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:08.033 13:04:48 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.411 0 00:20:09.411 13:04:49 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:09.411 13:04:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.411 13:04:49 -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 13:04:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.411 13:04:49 -- host/multicontroller.sh@100 -- # killprocess 92381 00:20:09.411 13:04:49 -- common/autotest_common.sh@936 -- # '[' -z 92381 ']' 00:20:09.411 13:04:49 -- common/autotest_common.sh@940 -- # kill -0 92381 00:20:09.411 13:04:49 -- common/autotest_common.sh@941 -- # uname 00:20:09.411 13:04:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:09.411 13:04:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92381 00:20:09.411 13:04:49 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:09.411 killing process with pid 92381 00:20:09.411 13:04:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:09.411 13:04:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92381' 00:20:09.411 13:04:49 -- common/autotest_common.sh@955 -- # kill 92381 00:20:09.411 13:04:49 -- common/autotest_common.sh@960 -- # wait 92381 00:20:09.411 13:04:50 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.411 13:04:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.411 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 13:04:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.411 13:04:50 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:09.411 13:04:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.411 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 13:04:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.411 13:04:50 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:09.411 13:04:50 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:09.411 13:04:50 -- common/autotest_common.sh@1607 -- # read -r file 00:20:09.411 13:04:50 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:09.411 13:04:50 -- common/autotest_common.sh@1606 -- # sort -u 00:20:09.411 13:04:50 -- common/autotest_common.sh@1608 -- # cat 00:20:09.411 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:09.411 [2024-12-13 13:04:47.411487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:09.411 [2024-12-13 13:04:47.411575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92381 ] 00:20:09.411 [2024-12-13 13:04:47.549031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.411 [2024-12-13 13:04:47.619834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.411 [2024-12-13 13:04:48.677693] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 77125fac-b835-493b-ad5a-268b1d60f659 already exists 00:20:09.411 [2024-12-13 13:04:48.677740] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:77125fac-b835-493b-ad5a-268b1d60f659 alias for bdev NVMe1n1 00:20:09.411 [2024-12-13 13:04:48.677802] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:09.411 Running I/O for 1 seconds... 
00:20:09.411 00:20:09.411 Latency(us) 00:20:09.411 [2024-12-13T13:04:50.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.411 [2024-12-13T13:04:50.187Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:09.411 NVMe0n1 : 1.00 22664.46 88.53 0.00 0.00 5635.16 3098.07 14715.81 00:20:09.411 [2024-12-13T13:04:50.187Z] =================================================================================================================== 00:20:09.411 [2024-12-13T13:04:50.187Z] Total : 22664.46 88.53 0.00 0.00 5635.16 3098.07 14715.81 00:20:09.411 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.411 00:20:09.411 Latency(us) 00:20:09.411 [2024-12-13T13:04:50.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.411 [2024-12-13T13:04:50.187Z] =================================================================================================================== 00:20:09.411 [2024-12-13T13:04:50.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.411 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:09.411 13:04:50 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:09.411 13:04:50 -- common/autotest_common.sh@1607 -- # read -r file 00:20:09.411 13:04:50 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:09.411 13:04:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:09.411 13:04:50 -- nvmf/common.sh@116 -- # sync 00:20:09.670 13:04:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:09.670 13:04:50 -- nvmf/common.sh@119 -- # set +e 00:20:09.670 13:04:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:09.670 13:04:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:09.670 rmmod nvme_tcp 00:20:09.670 rmmod nvme_fabrics 00:20:09.670 rmmod nvme_keyring 00:20:09.670 13:04:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:09.670 13:04:50 -- nvmf/common.sh@123 -- # set -e 00:20:09.670 13:04:50 -- nvmf/common.sh@124 -- # return 0 00:20:09.670 13:04:50 -- nvmf/common.sh@477 -- # '[' -n 92329 ']' 00:20:09.670 13:04:50 -- nvmf/common.sh@478 -- # killprocess 92329 00:20:09.670 13:04:50 -- common/autotest_common.sh@936 -- # '[' -z 92329 ']' 00:20:09.670 13:04:50 -- common/autotest_common.sh@940 -- # kill -0 92329 00:20:09.670 13:04:50 -- common/autotest_common.sh@941 -- # uname 00:20:09.670 13:04:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:09.670 13:04:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92329 00:20:09.670 killing process with pid 92329 00:20:09.670 13:04:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:09.670 13:04:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:09.670 13:04:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92329' 00:20:09.670 13:04:50 -- common/autotest_common.sh@955 -- # kill 92329 00:20:09.670 13:04:50 -- common/autotest_common.sh@960 -- # wait 92329 00:20:09.930 13:04:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:09.930 13:04:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:09.930 13:04:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:09.930 13:04:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.930 13:04:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:09.930 13:04:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.930 13:04:50 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:09.930 13:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.930 13:04:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:09.930 00:20:09.930 real 0m5.054s 00:20:09.930 user 0m15.656s 00:20:09.930 sys 0m1.164s 00:20:09.930 13:04:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:09.930 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.930 ************************************ 00:20:09.930 END TEST nvmf_multicontroller 00:20:09.930 ************************************ 00:20:09.930 13:04:50 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:09.930 13:04:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:09.930 13:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.930 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.930 ************************************ 00:20:09.930 START TEST nvmf_aer 00:20:09.930 ************************************ 00:20:09.930 13:04:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:09.930 * Looking for test storage... 00:20:09.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:09.930 13:04:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:09.930 13:04:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:09.930 13:04:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:10.189 13:04:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:10.189 13:04:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:10.189 13:04:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:10.189 13:04:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:10.189 13:04:50 -- scripts/common.sh@335 -- # IFS=.-: 00:20:10.189 13:04:50 -- scripts/common.sh@335 -- # read -ra ver1 00:20:10.189 13:04:50 -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.189 13:04:50 -- scripts/common.sh@336 -- # read -ra ver2 00:20:10.189 13:04:50 -- scripts/common.sh@337 -- # local 'op=<' 00:20:10.189 13:04:50 -- scripts/common.sh@339 -- # ver1_l=2 00:20:10.189 13:04:50 -- scripts/common.sh@340 -- # ver2_l=1 00:20:10.189 13:04:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:10.189 13:04:50 -- scripts/common.sh@343 -- # case "$op" in 00:20:10.189 13:04:50 -- scripts/common.sh@344 -- # : 1 00:20:10.189 13:04:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:10.189 13:04:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.189 13:04:50 -- scripts/common.sh@364 -- # decimal 1 00:20:10.189 13:04:50 -- scripts/common.sh@352 -- # local d=1 00:20:10.189 13:04:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.189 13:04:50 -- scripts/common.sh@354 -- # echo 1 00:20:10.189 13:04:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:10.189 13:04:50 -- scripts/common.sh@365 -- # decimal 2 00:20:10.189 13:04:50 -- scripts/common.sh@352 -- # local d=2 00:20:10.189 13:04:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.189 13:04:50 -- scripts/common.sh@354 -- # echo 2 00:20:10.189 13:04:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:10.189 13:04:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:10.189 13:04:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:10.189 13:04:50 -- scripts/common.sh@367 -- # return 0 00:20:10.189 13:04:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.189 13:04:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.189 --rc genhtml_branch_coverage=1 00:20:10.189 --rc genhtml_function_coverage=1 00:20:10.189 --rc genhtml_legend=1 00:20:10.189 --rc geninfo_all_blocks=1 00:20:10.189 --rc geninfo_unexecuted_blocks=1 00:20:10.189 00:20:10.189 ' 00:20:10.189 13:04:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.189 --rc genhtml_branch_coverage=1 00:20:10.189 --rc genhtml_function_coverage=1 00:20:10.189 --rc genhtml_legend=1 00:20:10.189 --rc geninfo_all_blocks=1 00:20:10.189 --rc geninfo_unexecuted_blocks=1 00:20:10.189 00:20:10.189 ' 00:20:10.189 13:04:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.189 --rc genhtml_branch_coverage=1 00:20:10.189 --rc genhtml_function_coverage=1 00:20:10.189 --rc genhtml_legend=1 00:20:10.189 --rc geninfo_all_blocks=1 00:20:10.189 --rc geninfo_unexecuted_blocks=1 00:20:10.189 00:20:10.189 ' 00:20:10.189 13:04:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.189 --rc genhtml_branch_coverage=1 00:20:10.189 --rc genhtml_function_coverage=1 00:20:10.189 --rc genhtml_legend=1 00:20:10.189 --rc geninfo_all_blocks=1 00:20:10.189 --rc geninfo_unexecuted_blocks=1 00:20:10.189 00:20:10.189 ' 00:20:10.189 13:04:50 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:10.189 13:04:50 -- nvmf/common.sh@7 -- # uname -s 00:20:10.189 13:04:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.189 13:04:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.189 13:04:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.189 13:04:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.189 13:04:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.189 13:04:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.189 13:04:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.189 13:04:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.189 13:04:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.189 13:04:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.189 13:04:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:10.190 
13:04:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:10.190 13:04:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.190 13:04:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.190 13:04:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.190 13:04:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.190 13:04:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.190 13:04:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.190 13:04:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.190 13:04:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.190 13:04:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.190 13:04:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.190 13:04:50 -- paths/export.sh@5 -- # export PATH 00:20:10.190 13:04:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.190 13:04:50 -- nvmf/common.sh@46 -- # : 0 00:20:10.190 13:04:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:10.190 13:04:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:10.190 13:04:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:10.190 13:04:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.190 13:04:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.190 13:04:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
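aer.sh reuses the same nvmf/common.sh preamble; the only per-run piece is the generated host identity shown above. A hedged note on how those values fit together (the NVME_HOSTID derivation here is illustrative, not necessarily how common.sh computes it):

# Hedged sketch: the host identity nvmf/common.sh prepares for kernel-initiator style tests.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29
NVME_HOSTID=${NVME_HOSTNQN##*:}         # illustrative: the UUID portion of the NQN, matching the value above
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# Tests that use the kernel initiator pass "${NVME_HOST[@]}" to 'nvme connect'.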
00:20:10.190 13:04:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:10.190 13:04:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:10.190 13:04:50 -- host/aer.sh@11 -- # nvmftestinit 00:20:10.190 13:04:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:10.190 13:04:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.190 13:04:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:10.190 13:04:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:10.190 13:04:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:10.190 13:04:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.190 13:04:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.190 13:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.190 13:04:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:10.190 13:04:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:10.190 13:04:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:10.190 13:04:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:10.190 13:04:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:10.190 13:04:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:10.190 13:04:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.190 13:04:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.190 13:04:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:10.190 13:04:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:10.190 13:04:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:10.190 13:04:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:10.190 13:04:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:10.190 13:04:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.190 13:04:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:10.190 13:04:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:10.190 13:04:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:10.190 13:04:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:10.190 13:04:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:10.190 13:04:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:10.190 Cannot find device "nvmf_tgt_br" 00:20:10.190 13:04:50 -- nvmf/common.sh@154 -- # true 00:20:10.190 13:04:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.190 Cannot find device "nvmf_tgt_br2" 00:20:10.190 13:04:50 -- nvmf/common.sh@155 -- # true 00:20:10.190 13:04:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:10.190 13:04:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:10.190 Cannot find device "nvmf_tgt_br" 00:20:10.190 13:04:50 -- nvmf/common.sh@157 -- # true 00:20:10.190 13:04:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:10.190 Cannot find device "nvmf_tgt_br2" 00:20:10.190 13:04:50 -- nvmf/common.sh@158 -- # true 00:20:10.190 13:04:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:10.190 13:04:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:10.454 13:04:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.454 13:04:50 -- nvmf/common.sh@161 -- # true 00:20:10.454 13:04:50 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.454 13:04:50 -- nvmf/common.sh@162 -- # true 00:20:10.454 13:04:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:10.454 13:04:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:10.454 13:04:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:10.454 13:04:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:10.454 13:04:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:10.454 13:04:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:10.454 13:04:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:10.454 13:04:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:10.454 13:04:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:10.454 13:04:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:10.454 13:04:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:10.454 13:04:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:10.454 13:04:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:10.454 13:04:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:10.454 13:04:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:10.454 13:04:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:10.454 13:04:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:10.454 13:04:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:10.454 13:04:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:10.454 13:04:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:10.454 13:04:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:10.454 13:04:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:10.454 13:04:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:10.454 13:04:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:10.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:20:10.454 00:20:10.454 --- 10.0.0.2 ping statistics --- 00:20:10.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.454 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:10.454 13:04:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:10.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:10.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:10.454 00:20:10.454 --- 10.0.0.3 ping statistics --- 00:20:10.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.454 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:10.454 13:04:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:10.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:10.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:10.454 00:20:10.454 --- 10.0.0.1 ping statistics --- 00:20:10.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.454 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:10.454 13:04:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.454 13:04:51 -- nvmf/common.sh@421 -- # return 0 00:20:10.454 13:04:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:10.454 13:04:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.454 13:04:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:10.454 13:04:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:10.454 13:04:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.454 13:04:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:10.454 13:04:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:10.454 13:04:51 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:10.454 13:04:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:10.454 13:04:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.454 13:04:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.454 13:04:51 -- nvmf/common.sh@469 -- # nvmfpid=92642 00:20:10.454 13:04:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:10.454 13:04:51 -- nvmf/common.sh@470 -- # waitforlisten 92642 00:20:10.454 13:04:51 -- common/autotest_common.sh@829 -- # '[' -z 92642 ']' 00:20:10.454 13:04:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.454 13:04:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.455 13:04:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.455 13:04:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.455 13:04:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.719 [2024-12-13 13:04:51.231006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:10.719 [2024-12-13 13:04:51.231121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.719 [2024-12-13 13:04:51.369261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.719 [2024-12-13 13:04:51.428825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:10.719 [2024-12-13 13:04:51.428993] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.719 [2024-12-13 13:04:51.429005] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.719 [2024-12-13 13:04:51.429012] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
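The aer target comes up the same way as the multicontroller one, but with a wider core mask. A hedged reading of the masks that appear in this log:

# Core masks seen in this run:
#   nvmf_tgt -m 0xE -> 0b1110 -> reactors on cores 1,2,3 (the multicontroller target, "Total cores available: 3")
#   nvmf_tgt -m 0xF -> 0b1111 -> reactors on cores 0,1,2,3 (this aer target, "Total cores available: 4")
#   bdevperf ran with core mask 0x1 -> a single reactor on core 0 (see the try.txt dump above)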
00:20:10.719 [2024-12-13 13:04:51.429510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.719 [2024-12-13 13:04:51.429667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.719 [2024-12-13 13:04:51.429861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.719 [2024-12-13 13:04:51.429867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.655 13:04:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.655 13:04:52 -- common/autotest_common.sh@862 -- # return 0 00:20:11.655 13:04:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:11.655 13:04:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.655 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.655 13:04:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.655 13:04:52 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.655 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.655 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.655 [2024-12-13 13:04:52.334523] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.655 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.655 13:04:52 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:11.655 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.655 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.655 Malloc0 00:20:11.655 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.655 13:04:52 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:11.655 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.655 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.655 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.655 13:04:52 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.655 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.655 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.655 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.655 13:04:52 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.655 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.655 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.655 [2024-12-13 13:04:52.402275] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.655 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.655 13:04:52 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:11.655 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.655 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.655 [2024-12-13 13:04:52.410019] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:11.655 [ 00:20:11.655 { 00:20:11.655 "allow_any_host": true, 00:20:11.655 "hosts": [], 00:20:11.655 "listen_addresses": [], 00:20:11.655 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.655 "subtype": "Discovery" 00:20:11.655 }, 00:20:11.655 { 00:20:11.655 "allow_any_host": true, 00:20:11.655 "hosts": 
[], 00:20:11.655 "listen_addresses": [ 00:20:11.655 { 00:20:11.655 "adrfam": "IPv4", 00:20:11.655 "traddr": "10.0.0.2", 00:20:11.655 "transport": "TCP", 00:20:11.655 "trsvcid": "4420", 00:20:11.655 "trtype": "TCP" 00:20:11.655 } 00:20:11.655 ], 00:20:11.655 "max_cntlid": 65519, 00:20:11.655 "max_namespaces": 2, 00:20:11.655 "min_cntlid": 1, 00:20:11.655 "model_number": "SPDK bdev Controller", 00:20:11.655 "namespaces": [ 00:20:11.655 { 00:20:11.655 "bdev_name": "Malloc0", 00:20:11.655 "name": "Malloc0", 00:20:11.655 "nguid": "49D3BCBFC1B64935B63A26468C32F965", 00:20:11.655 "nsid": 1, 00:20:11.655 "uuid": "49d3bcbf-c1b6-4935-b63a-26468c32f965" 00:20:11.655 } 00:20:11.655 ], 00:20:11.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.655 "serial_number": "SPDK00000000000001", 00:20:11.655 "subtype": "NVMe" 00:20:11.655 } 00:20:11.655 ] 00:20:11.655 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.655 13:04:52 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:11.655 13:04:52 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:11.655 13:04:52 -- host/aer.sh@33 -- # aerpid=92697 00:20:11.655 13:04:52 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:11.655 13:04:52 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:11.655 13:04:52 -- common/autotest_common.sh@1254 -- # local i=0 00:20:11.655 13:04:52 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.655 13:04:52 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:11.655 13:04:52 -- common/autotest_common.sh@1257 -- # i=1 00:20:11.655 13:04:52 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:11.914 13:04:52 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.914 13:04:52 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:11.914 13:04:52 -- common/autotest_common.sh@1257 -- # i=2 00:20:11.914 13:04:52 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:11.914 13:04:52 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.914 13:04:52 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:11.914 13:04:52 -- common/autotest_common.sh@1265 -- # return 0 00:20:11.914 13:04:52 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:11.914 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.914 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.914 Malloc1 00:20:11.914 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.914 13:04:52 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:11.914 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.914 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:12.173 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.173 13:04:52 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:12.173 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.173 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:12.173 [ 00:20:12.173 { 00:20:12.173 "allow_any_host": true, 00:20:12.173 "hosts": [], 00:20:12.173 "listen_addresses": [], 00:20:12.173 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:12.173 "subtype": "Discovery" 00:20:12.173 }, 00:20:12.173 { 00:20:12.173 "allow_any_host": true, 00:20:12.173 "hosts": [], 00:20:12.173 "listen_addresses": [ 00:20:12.173 { 00:20:12.173 "adrfam": "IPv4", 00:20:12.173 "traddr": "10.0.0.2", 00:20:12.173 "transport": "TCP", 00:20:12.173 "trsvcid": "4420", 00:20:12.173 "trtype": "TCP" 00:20:12.173 } 00:20:12.173 ], 00:20:12.173 "max_cntlid": 65519, 00:20:12.173 "max_namespaces": 2, 00:20:12.173 "min_cntlid": 1, 00:20:12.173 "model_number": "SPDK bdev Controller", 00:20:12.173 Asynchronous Event Request test 00:20:12.173 Attaching to 10.0.0.2 00:20:12.173 Attached to 10.0.0.2 00:20:12.173 Registering asynchronous event callbacks... 00:20:12.173 Starting namespace attribute notice tests for all controllers... 00:20:12.173 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:12.173 aer_cb - Changed Namespace 00:20:12.173 Cleaning up... 
00:20:12.173 "namespaces": [ 00:20:12.173 { 00:20:12.173 "bdev_name": "Malloc0", 00:20:12.173 "name": "Malloc0", 00:20:12.173 "nguid": "49D3BCBFC1B64935B63A26468C32F965", 00:20:12.173 "nsid": 1, 00:20:12.173 "uuid": "49d3bcbf-c1b6-4935-b63a-26468c32f965" 00:20:12.173 }, 00:20:12.173 { 00:20:12.173 "bdev_name": "Malloc1", 00:20:12.173 "name": "Malloc1", 00:20:12.173 "nguid": "640E019BD53A46AC96E11F47455A81DE", 00:20:12.173 "nsid": 2, 00:20:12.173 "uuid": "640e019b-d53a-46ac-96e1-1f47455a81de" 00:20:12.173 } 00:20:12.173 ], 00:20:12.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.173 "serial_number": "SPDK00000000000001", 00:20:12.173 "subtype": "NVMe" 00:20:12.173 } 00:20:12.173 ] 00:20:12.173 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.173 13:04:52 -- host/aer.sh@43 -- # wait 92697 00:20:12.173 13:04:52 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:12.173 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.173 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:12.173 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.173 13:04:52 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:12.173 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.173 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:12.173 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.173 13:04:52 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.173 13:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.173 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:12.173 13:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.173 13:04:52 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:12.173 13:04:52 -- host/aer.sh@51 -- # nvmftestfini 00:20:12.173 13:04:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:12.173 13:04:52 -- nvmf/common.sh@116 -- # sync 00:20:12.173 13:04:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:12.173 13:04:52 -- nvmf/common.sh@119 -- # set +e 00:20:12.173 13:04:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:12.173 13:04:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:12.173 rmmod nvme_tcp 00:20:12.173 rmmod nvme_fabrics 00:20:12.173 rmmod nvme_keyring 00:20:12.173 13:04:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:12.173 13:04:52 -- nvmf/common.sh@123 -- # set -e 00:20:12.173 13:04:52 -- nvmf/common.sh@124 -- # return 0 00:20:12.173 13:04:52 -- nvmf/common.sh@477 -- # '[' -n 92642 ']' 00:20:12.173 13:04:52 -- nvmf/common.sh@478 -- # killprocess 92642 00:20:12.173 13:04:52 -- common/autotest_common.sh@936 -- # '[' -z 92642 ']' 00:20:12.174 13:04:52 -- common/autotest_common.sh@940 -- # kill -0 92642 00:20:12.174 13:04:52 -- common/autotest_common.sh@941 -- # uname 00:20:12.174 13:04:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.174 13:04:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92642 00:20:12.174 killing process with pid 92642 00:20:12.174 13:04:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:12.174 13:04:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:12.174 13:04:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92642' 00:20:12.174 13:04:52 -- common/autotest_common.sh@955 -- # kill 92642 00:20:12.174 [2024-12-13 13:04:52.940400] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 
'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:12.174 13:04:52 -- common/autotest_common.sh@960 -- # wait 92642 00:20:12.432 13:04:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:12.432 13:04:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:12.432 13:04:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:12.432 13:04:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.432 13:04:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:12.432 13:04:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.433 13:04:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.433 13:04:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.433 13:04:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:12.433 00:20:12.433 real 0m2.541s 00:20:12.433 user 0m7.194s 00:20:12.433 sys 0m0.684s 00:20:12.433 13:04:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:12.433 13:04:53 -- common/autotest_common.sh@10 -- # set +x 00:20:12.433 ************************************ 00:20:12.433 END TEST nvmf_aer 00:20:12.433 ************************************ 00:20:12.433 13:04:53 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:12.433 13:04:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:12.433 13:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.433 13:04:53 -- common/autotest_common.sh@10 -- # set +x 00:20:12.700 ************************************ 00:20:12.700 START TEST nvmf_async_init 00:20:12.700 ************************************ 00:20:12.700 13:04:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:12.700 * Looking for test storage... 00:20:12.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.700 13:04:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:12.700 13:04:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:12.700 13:04:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:12.700 13:04:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:12.700 13:04:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:12.700 13:04:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:12.700 13:04:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:12.700 13:04:53 -- scripts/common.sh@335 -- # IFS=.-: 00:20:12.700 13:04:53 -- scripts/common.sh@335 -- # read -ra ver1 00:20:12.700 13:04:53 -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.700 13:04:53 -- scripts/common.sh@336 -- # read -ra ver2 00:20:12.700 13:04:53 -- scripts/common.sh@337 -- # local 'op=<' 00:20:12.700 13:04:53 -- scripts/common.sh@339 -- # ver1_l=2 00:20:12.700 13:04:53 -- scripts/common.sh@340 -- # ver2_l=1 00:20:12.700 13:04:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:12.700 13:04:53 -- scripts/common.sh@343 -- # case "$op" in 00:20:12.700 13:04:53 -- scripts/common.sh@344 -- # : 1 00:20:12.700 13:04:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:12.700 13:04:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.700 13:04:53 -- scripts/common.sh@364 -- # decimal 1 00:20:12.700 13:04:53 -- scripts/common.sh@352 -- # local d=1 00:20:12.700 13:04:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.700 13:04:53 -- scripts/common.sh@354 -- # echo 1 00:20:12.700 13:04:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:12.700 13:04:53 -- scripts/common.sh@365 -- # decimal 2 00:20:12.700 13:04:53 -- scripts/common.sh@352 -- # local d=2 00:20:12.700 13:04:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.700 13:04:53 -- scripts/common.sh@354 -- # echo 2 00:20:12.700 13:04:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:12.700 13:04:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:12.700 13:04:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:12.700 13:04:53 -- scripts/common.sh@367 -- # return 0 00:20:12.700 13:04:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.700 13:04:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:12.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.700 --rc genhtml_branch_coverage=1 00:20:12.700 --rc genhtml_function_coverage=1 00:20:12.700 --rc genhtml_legend=1 00:20:12.700 --rc geninfo_all_blocks=1 00:20:12.700 --rc geninfo_unexecuted_blocks=1 00:20:12.700 00:20:12.700 ' 00:20:12.700 13:04:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:12.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.700 --rc genhtml_branch_coverage=1 00:20:12.701 --rc genhtml_function_coverage=1 00:20:12.701 --rc genhtml_legend=1 00:20:12.701 --rc geninfo_all_blocks=1 00:20:12.701 --rc geninfo_unexecuted_blocks=1 00:20:12.701 00:20:12.701 ' 00:20:12.701 13:04:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:12.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.701 --rc genhtml_branch_coverage=1 00:20:12.701 --rc genhtml_function_coverage=1 00:20:12.701 --rc genhtml_legend=1 00:20:12.701 --rc geninfo_all_blocks=1 00:20:12.701 --rc geninfo_unexecuted_blocks=1 00:20:12.701 00:20:12.701 ' 00:20:12.701 13:04:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:12.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.701 --rc genhtml_branch_coverage=1 00:20:12.701 --rc genhtml_function_coverage=1 00:20:12.701 --rc genhtml_legend=1 00:20:12.701 --rc geninfo_all_blocks=1 00:20:12.701 --rc geninfo_unexecuted_blocks=1 00:20:12.701 00:20:12.701 ' 00:20:12.701 13:04:53 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.701 13:04:53 -- nvmf/common.sh@7 -- # uname -s 00:20:12.701 13:04:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.701 13:04:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.701 13:04:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.701 13:04:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.701 13:04:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.701 13:04:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.701 13:04:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.701 13:04:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.701 13:04:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.701 13:04:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.701 13:04:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:12.701 
13:04:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:12.701 13:04:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.701 13:04:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.701 13:04:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.701 13:04:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.701 13:04:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.701 13:04:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.701 13:04:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.701 13:04:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.701 13:04:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.701 13:04:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.701 13:04:53 -- paths/export.sh@5 -- # export PATH 00:20:12.701 13:04:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.701 13:04:53 -- nvmf/common.sh@46 -- # : 0 00:20:12.701 13:04:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:12.701 13:04:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:12.701 13:04:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:12.701 13:04:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.701 13:04:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.701 13:04:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
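The nvmf_aer test that completed above boils down to a short RPC sequence against the running nvmf_tgt. The sketch below is a condensed reading of that trace, not the test script itself: it assumes the standard scripts/rpc.py client in place of the rpc_cmd helper, and it shortens the waitforfile polling the harness uses before adding the second namespace. Because the subsystem is created with -m 2, adding Malloc1 as namespace 2 while the aer tool is listening is what produces the "aer_cb - Changed Namespace" notice recorded above.

# Hypothetical condensation of the nvmf_aer flow traced above (paths relative to the spdk repo root).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer tool registers AER callbacks and touches the file once it is ready.
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # the harness caps this loop at ~20s
# Adding a second namespace now triggers the namespace-change AEN.
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait "$aerpid"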
00:20:12.701 13:04:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:12.701 13:04:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:12.701 13:04:53 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:12.701 13:04:53 -- host/async_init.sh@14 -- # null_block_size=512 00:20:12.701 13:04:53 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:12.701 13:04:53 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:12.701 13:04:53 -- host/async_init.sh@20 -- # uuidgen 00:20:12.701 13:04:53 -- host/async_init.sh@20 -- # tr -d - 00:20:12.701 13:04:53 -- host/async_init.sh@20 -- # nguid=fddc6f19844c4be290018a52d47a9dfc 00:20:12.701 13:04:53 -- host/async_init.sh@22 -- # nvmftestinit 00:20:12.701 13:04:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:12.701 13:04:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.701 13:04:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:12.701 13:04:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:12.701 13:04:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:12.701 13:04:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.701 13:04:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.701 13:04:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.701 13:04:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:12.701 13:04:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:12.701 13:04:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:12.701 13:04:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:12.701 13:04:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:12.701 13:04:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:12.701 13:04:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.701 13:04:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.701 13:04:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:12.701 13:04:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:12.701 13:04:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.701 13:04:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.701 13:04:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.701 13:04:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.701 13:04:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.701 13:04:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.701 13:04:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.701 13:04:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.701 13:04:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:12.701 13:04:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:12.959 Cannot find device "nvmf_tgt_br" 00:20:12.959 13:04:53 -- nvmf/common.sh@154 -- # true 00:20:12.959 13:04:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.959 Cannot find device "nvmf_tgt_br2" 00:20:12.959 13:04:53 -- nvmf/common.sh@155 -- # true 00:20:12.959 13:04:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:12.959 13:04:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:12.959 Cannot find device "nvmf_tgt_br" 00:20:12.959 13:04:53 -- nvmf/common.sh@157 -- # true 00:20:12.959 13:04:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:12.959 Cannot find device "nvmf_tgt_br2" 00:20:12.959 13:04:53 
-- nvmf/common.sh@158 -- # true 00:20:12.959 13:04:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:12.959 13:04:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:12.959 13:04:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.959 13:04:53 -- nvmf/common.sh@161 -- # true 00:20:12.959 13:04:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.959 13:04:53 -- nvmf/common.sh@162 -- # true 00:20:12.959 13:04:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.959 13:04:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.959 13:04:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.959 13:04:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.960 13:04:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.960 13:04:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.960 13:04:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.960 13:04:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:12.960 13:04:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:12.960 13:04:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:12.960 13:04:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:12.960 13:04:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:13.218 13:04:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:13.218 13:04:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.218 13:04:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.218 13:04:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.218 13:04:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:13.218 13:04:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:13.218 13:04:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.218 13:04:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.218 13:04:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.218 13:04:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.218 13:04:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.218 13:04:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:13.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:20:13.218 00:20:13.218 --- 10.0.0.2 ping statistics --- 00:20:13.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.218 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:13.218 13:04:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:13.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:13.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:20:13.218 00:20:13.218 --- 10.0.0.3 ping statistics --- 00:20:13.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.218 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:13.218 13:04:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:13.218 00:20:13.218 --- 10.0.0.1 ping statistics --- 00:20:13.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.218 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:13.218 13:04:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.218 13:04:53 -- nvmf/common.sh@421 -- # return 0 00:20:13.218 13:04:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:13.218 13:04:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.218 13:04:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:13.218 13:04:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:13.218 13:04:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.218 13:04:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:13.218 13:04:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:13.218 13:04:53 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:13.218 13:04:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:13.218 13:04:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.218 13:04:53 -- common/autotest_common.sh@10 -- # set +x 00:20:13.218 13:04:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:13.218 13:04:53 -- nvmf/common.sh@469 -- # nvmfpid=92876 00:20:13.218 13:04:53 -- nvmf/common.sh@470 -- # waitforlisten 92876 00:20:13.218 13:04:53 -- common/autotest_common.sh@829 -- # '[' -z 92876 ']' 00:20:13.218 13:04:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.218 13:04:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.218 13:04:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.218 13:04:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.218 13:04:53 -- common/autotest_common.sh@10 -- # set +x 00:20:13.218 [2024-12-13 13:04:53.896378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:13.218 [2024-12-13 13:04:53.896500] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.477 [2024-12-13 13:04:54.025775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.477 [2024-12-13 13:04:54.082857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:13.477 [2024-12-13 13:04:54.083007] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.477 [2024-12-13 13:04:54.083021] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
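For reference, the nvmf_veth_init sequence traced just above builds the topology that the 10.0.0.x pings then verify: the target runs inside the nvmf_tgt_ns_spdk namespace and is reached from the host over veth pairs bridged through nvmf_br. A condensed sketch of those ip/iptables calls follows; it must run as root, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is left out for brevity.

# Condensed from the nvmf_veth_init trace above; second target interface omitted.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # host-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host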
00:20:13.477 [2024-12-13 13:04:54.083030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.477 [2024-12-13 13:04:54.083066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.412 13:04:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.412 13:04:54 -- common/autotest_common.sh@862 -- # return 0 00:20:14.412 13:04:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.412 13:04:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.412 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:20:14.412 13:04:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.412 13:04:54 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:14.412 13:04:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.412 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:20:14.412 [2024-12-13 13:04:54.995977] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.412 13:04:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.412 13:04:54 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:14.412 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.412 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.412 null0 00:20:14.412 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.412 13:04:55 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:14.412 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.412 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.412 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.412 13:04:55 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:14.412 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.412 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.412 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.412 13:04:55 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fddc6f19844c4be290018a52d47a9dfc 00:20:14.412 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.412 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.412 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.412 13:04:55 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:14.412 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.412 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.413 [2024-12-13 13:04:55.044095] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.413 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.413 13:04:55 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:14.413 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.413 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.671 nvme0n1 00:20:14.671 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.671 13:04:55 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:14.671 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.671 13:04:55 -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.671 [ 00:20:14.671 { 00:20:14.671 "aliases": [ 00:20:14.671 "fddc6f19-844c-4be2-9001-8a52d47a9dfc" 00:20:14.671 ], 00:20:14.671 "assigned_rate_limits": { 00:20:14.671 "r_mbytes_per_sec": 0, 00:20:14.671 "rw_ios_per_sec": 0, 00:20:14.671 "rw_mbytes_per_sec": 0, 00:20:14.671 "w_mbytes_per_sec": 0 00:20:14.671 }, 00:20:14.671 "block_size": 512, 00:20:14.671 "claimed": false, 00:20:14.671 "driver_specific": { 00:20:14.671 "mp_policy": "active_passive", 00:20:14.671 "nvme": [ 00:20:14.671 { 00:20:14.671 "ctrlr_data": { 00:20:14.671 "ana_reporting": false, 00:20:14.671 "cntlid": 1, 00:20:14.671 "firmware_revision": "24.01.1", 00:20:14.671 "model_number": "SPDK bdev Controller", 00:20:14.671 "multi_ctrlr": true, 00:20:14.671 "oacs": { 00:20:14.671 "firmware": 0, 00:20:14.671 "format": 0, 00:20:14.671 "ns_manage": 0, 00:20:14.671 "security": 0 00:20:14.671 }, 00:20:14.671 "serial_number": "00000000000000000000", 00:20:14.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.671 "vendor_id": "0x8086" 00:20:14.671 }, 00:20:14.671 "ns_data": { 00:20:14.671 "can_share": true, 00:20:14.671 "id": 1 00:20:14.671 }, 00:20:14.671 "trid": { 00:20:14.671 "adrfam": "IPv4", 00:20:14.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.671 "traddr": "10.0.0.2", 00:20:14.671 "trsvcid": "4420", 00:20:14.671 "trtype": "TCP" 00:20:14.671 }, 00:20:14.671 "vs": { 00:20:14.671 "nvme_version": "1.3" 00:20:14.671 } 00:20:14.671 } 00:20:14.671 ] 00:20:14.671 }, 00:20:14.671 "name": "nvme0n1", 00:20:14.671 "num_blocks": 2097152, 00:20:14.671 "product_name": "NVMe disk", 00:20:14.671 "supported_io_types": { 00:20:14.671 "abort": true, 00:20:14.671 "compare": true, 00:20:14.671 "compare_and_write": true, 00:20:14.671 "flush": true, 00:20:14.671 "nvme_admin": true, 00:20:14.671 "nvme_io": true, 00:20:14.671 "read": true, 00:20:14.671 "reset": true, 00:20:14.671 "unmap": false, 00:20:14.671 "write": true, 00:20:14.671 "write_zeroes": true 00:20:14.671 }, 00:20:14.671 "uuid": "fddc6f19-844c-4be2-9001-8a52d47a9dfc", 00:20:14.671 "zoned": false 00:20:14.671 } 00:20:14.671 ] 00:20:14.671 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.671 13:04:55 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:14.671 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.671 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.671 [2024-12-13 13:04:55.312052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:14.671 [2024-12-13 13:04:55.312154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e501c0 (9): Bad file descriptor 00:20:14.671 [2024-12-13 13:04:55.444044] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
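The async_init exercise above is likewise driven entirely over RPC: a null bdev is exported with a fixed NGUID, SPDK's own NVMe-oF initiator is attached to it over TCP as controller nvme0, and the resulting nvme0n1 bdev is re-read after a controller reset to confirm the same namespace UUID comes back (in the log only the cntlid changes from 1 to 2). A minimal sketch, again assuming scripts/rpc.py rather than the rpc_cmd helper and the addresses from this log:

# Hypothetical condensation of the async_init attach/reset steps traced above.
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_null_create null0 1024 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fddc6f19844c4be290018a52d47a9dfc
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Attach the SPDK initiator as a bdev and inspect the namespace it exposes.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_get_bdevs -b nvme0n1
# Reset the controller; the bdev should survive with the same uuid/nguid.
scripts/rpc.py bdev_nvme_reset_controller nvme0
scripts/rpc.py bdev_get_bdevs -b nvme0n1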
00:20:14.930 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.930 13:04:55 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:14.930 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.930 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.930 [ 00:20:14.930 { 00:20:14.930 "aliases": [ 00:20:14.930 "fddc6f19-844c-4be2-9001-8a52d47a9dfc" 00:20:14.930 ], 00:20:14.930 "assigned_rate_limits": { 00:20:14.930 "r_mbytes_per_sec": 0, 00:20:14.930 "rw_ios_per_sec": 0, 00:20:14.930 "rw_mbytes_per_sec": 0, 00:20:14.930 "w_mbytes_per_sec": 0 00:20:14.930 }, 00:20:14.930 "block_size": 512, 00:20:14.930 "claimed": false, 00:20:14.930 "driver_specific": { 00:20:14.930 "mp_policy": "active_passive", 00:20:14.930 "nvme": [ 00:20:14.930 { 00:20:14.930 "ctrlr_data": { 00:20:14.930 "ana_reporting": false, 00:20:14.930 "cntlid": 2, 00:20:14.930 "firmware_revision": "24.01.1", 00:20:14.930 "model_number": "SPDK bdev Controller", 00:20:14.930 "multi_ctrlr": true, 00:20:14.930 "oacs": { 00:20:14.930 "firmware": 0, 00:20:14.930 "format": 0, 00:20:14.930 "ns_manage": 0, 00:20:14.930 "security": 0 00:20:14.930 }, 00:20:14.930 "serial_number": "00000000000000000000", 00:20:14.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.930 "vendor_id": "0x8086" 00:20:14.930 }, 00:20:14.930 "ns_data": { 00:20:14.930 "can_share": true, 00:20:14.930 "id": 1 00:20:14.930 }, 00:20:14.930 "trid": { 00:20:14.930 "adrfam": "IPv4", 00:20:14.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.930 "traddr": "10.0.0.2", 00:20:14.930 "trsvcid": "4420", 00:20:14.930 "trtype": "TCP" 00:20:14.930 }, 00:20:14.930 "vs": { 00:20:14.930 "nvme_version": "1.3" 00:20:14.930 } 00:20:14.930 } 00:20:14.930 ] 00:20:14.930 }, 00:20:14.930 "name": "nvme0n1", 00:20:14.930 "num_blocks": 2097152, 00:20:14.930 "product_name": "NVMe disk", 00:20:14.930 "supported_io_types": { 00:20:14.930 "abort": true, 00:20:14.930 "compare": true, 00:20:14.930 "compare_and_write": true, 00:20:14.930 "flush": true, 00:20:14.930 "nvme_admin": true, 00:20:14.930 "nvme_io": true, 00:20:14.930 "read": true, 00:20:14.930 "reset": true, 00:20:14.930 "unmap": false, 00:20:14.930 "write": true, 00:20:14.930 "write_zeroes": true 00:20:14.930 }, 00:20:14.930 "uuid": "fddc6f19-844c-4be2-9001-8a52d47a9dfc", 00:20:14.930 "zoned": false 00:20:14.930 } 00:20:14.930 ] 00:20:14.930 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.930 13:04:55 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.930 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.930 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.930 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.930 13:04:55 -- host/async_init.sh@53 -- # mktemp 00:20:14.930 13:04:55 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.awM9bs2LdJ 00:20:14.930 13:04:55 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:14.930 13:04:55 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.awM9bs2LdJ 00:20:14.930 13:04:55 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:14.930 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.930 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.930 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.930 13:04:55 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:14.930 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.930 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.930 [2024-12-13 13:04:55.508221] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.930 [2024-12-13 13:04:55.508378] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:14.930 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.930 13:04:55 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.awM9bs2LdJ 00:20:14.930 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.931 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.931 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.931 13:04:55 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.awM9bs2LdJ 00:20:14.931 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.931 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.931 [2024-12-13 13:04:55.524201] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.931 nvme0n1 00:20:14.931 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.931 13:04:55 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:14.931 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.931 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.931 [ 00:20:14.931 { 00:20:14.931 "aliases": [ 00:20:14.931 "fddc6f19-844c-4be2-9001-8a52d47a9dfc" 00:20:14.931 ], 00:20:14.931 "assigned_rate_limits": { 00:20:14.931 "r_mbytes_per_sec": 0, 00:20:14.931 "rw_ios_per_sec": 0, 00:20:14.931 "rw_mbytes_per_sec": 0, 00:20:14.931 "w_mbytes_per_sec": 0 00:20:14.931 }, 00:20:14.931 "block_size": 512, 00:20:14.931 "claimed": false, 00:20:14.931 "driver_specific": { 00:20:14.931 "mp_policy": "active_passive", 00:20:14.931 "nvme": [ 00:20:14.931 { 00:20:14.931 "ctrlr_data": { 00:20:14.931 "ana_reporting": false, 00:20:14.931 "cntlid": 3, 00:20:14.931 "firmware_revision": "24.01.1", 00:20:14.931 "model_number": "SPDK bdev Controller", 00:20:14.931 "multi_ctrlr": true, 00:20:14.931 "oacs": { 00:20:14.931 "firmware": 0, 00:20:14.931 "format": 0, 00:20:14.931 "ns_manage": 0, 00:20:14.931 "security": 0 00:20:14.931 }, 00:20:14.931 "serial_number": "00000000000000000000", 00:20:14.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.931 "vendor_id": "0x8086" 00:20:14.931 }, 00:20:14.931 "ns_data": { 00:20:14.931 "can_share": true, 00:20:14.931 "id": 1 00:20:14.931 }, 00:20:14.931 "trid": { 00:20:14.931 "adrfam": "IPv4", 00:20:14.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.931 "traddr": "10.0.0.2", 00:20:14.931 "trsvcid": "4421", 00:20:14.931 "trtype": "TCP" 00:20:14.931 }, 00:20:14.931 "vs": { 00:20:14.931 "nvme_version": "1.3" 00:20:14.931 } 00:20:14.931 } 00:20:14.931 ] 00:20:14.931 }, 00:20:14.931 "name": "nvme0n1", 00:20:14.931 "num_blocks": 2097152, 00:20:14.931 "product_name": "NVMe disk", 00:20:14.931 "supported_io_types": { 00:20:14.931 "abort": true, 00:20:14.931 "compare": true, 00:20:14.931 "compare_and_write": true, 00:20:14.931 "flush": true, 00:20:14.931 "nvme_admin": true, 00:20:14.931 "nvme_io": true, 00:20:14.931 
"read": true, 00:20:14.931 "reset": true, 00:20:14.931 "unmap": false, 00:20:14.931 "write": true, 00:20:14.931 "write_zeroes": true 00:20:14.931 }, 00:20:14.931 "uuid": "fddc6f19-844c-4be2-9001-8a52d47a9dfc", 00:20:14.931 "zoned": false 00:20:14.931 } 00:20:14.931 ] 00:20:14.931 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.931 13:04:55 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.931 13:04:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.931 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.931 13:04:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.931 13:04:55 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.awM9bs2LdJ 00:20:14.931 13:04:55 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:14.931 13:04:55 -- host/async_init.sh@78 -- # nvmftestfini 00:20:14.931 13:04:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:14.931 13:04:55 -- nvmf/common.sh@116 -- # sync 00:20:14.931 13:04:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:14.931 13:04:55 -- nvmf/common.sh@119 -- # set +e 00:20:14.931 13:04:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:14.931 13:04:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:14.931 rmmod nvme_tcp 00:20:15.190 rmmod nvme_fabrics 00:20:15.190 rmmod nvme_keyring 00:20:15.190 13:04:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:15.190 13:04:55 -- nvmf/common.sh@123 -- # set -e 00:20:15.190 13:04:55 -- nvmf/common.sh@124 -- # return 0 00:20:15.190 13:04:55 -- nvmf/common.sh@477 -- # '[' -n 92876 ']' 00:20:15.190 13:04:55 -- nvmf/common.sh@478 -- # killprocess 92876 00:20:15.190 13:04:55 -- common/autotest_common.sh@936 -- # '[' -z 92876 ']' 00:20:15.190 13:04:55 -- common/autotest_common.sh@940 -- # kill -0 92876 00:20:15.190 13:04:55 -- common/autotest_common.sh@941 -- # uname 00:20:15.190 13:04:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.190 13:04:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92876 00:20:15.190 13:04:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:15.190 killing process with pid 92876 00:20:15.190 13:04:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:15.190 13:04:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92876' 00:20:15.190 13:04:55 -- common/autotest_common.sh@955 -- # kill 92876 00:20:15.190 13:04:55 -- common/autotest_common.sh@960 -- # wait 92876 00:20:15.190 13:04:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:15.190 13:04:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:15.190 13:04:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:15.190 13:04:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.190 13:04:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:15.190 13:04:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.190 13:04:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.190 13:04:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.449 13:04:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:15.449 00:20:15.449 real 0m2.775s 00:20:15.449 user 0m2.600s 00:20:15.449 sys 0m0.631s 00:20:15.449 13:04:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:15.449 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 ************************************ 00:20:15.449 END TEST nvmf_async_init 00:20:15.449 
************************************ 00:20:15.449 13:04:56 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:15.449 13:04:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.449 13:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.449 13:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 ************************************ 00:20:15.449 START TEST dma 00:20:15.449 ************************************ 00:20:15.449 13:04:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:15.449 * Looking for test storage... 00:20:15.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.449 13:04:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:15.449 13:04:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:15.449 13:04:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:15.449 13:04:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:15.449 13:04:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:15.449 13:04:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:15.449 13:04:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:15.449 13:04:56 -- scripts/common.sh@335 -- # IFS=.-: 00:20:15.449 13:04:56 -- scripts/common.sh@335 -- # read -ra ver1 00:20:15.449 13:04:56 -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.449 13:04:56 -- scripts/common.sh@336 -- # read -ra ver2 00:20:15.449 13:04:56 -- scripts/common.sh@337 -- # local 'op=<' 00:20:15.449 13:04:56 -- scripts/common.sh@339 -- # ver1_l=2 00:20:15.449 13:04:56 -- scripts/common.sh@340 -- # ver2_l=1 00:20:15.449 13:04:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:15.449 13:04:56 -- scripts/common.sh@343 -- # case "$op" in 00:20:15.449 13:04:56 -- scripts/common.sh@344 -- # : 1 00:20:15.449 13:04:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:15.449 13:04:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.449 13:04:56 -- scripts/common.sh@364 -- # decimal 1 00:20:15.449 13:04:56 -- scripts/common.sh@352 -- # local d=1 00:20:15.449 13:04:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.449 13:04:56 -- scripts/common.sh@354 -- # echo 1 00:20:15.449 13:04:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:15.449 13:04:56 -- scripts/common.sh@365 -- # decimal 2 00:20:15.449 13:04:56 -- scripts/common.sh@352 -- # local d=2 00:20:15.449 13:04:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.449 13:04:56 -- scripts/common.sh@354 -- # echo 2 00:20:15.449 13:04:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:15.449 13:04:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:15.449 13:04:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:15.449 13:04:56 -- scripts/common.sh@367 -- # return 0 00:20:15.449 13:04:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.449 13:04:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:15.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.449 --rc genhtml_branch_coverage=1 00:20:15.449 --rc genhtml_function_coverage=1 00:20:15.449 --rc genhtml_legend=1 00:20:15.449 --rc geninfo_all_blocks=1 00:20:15.449 --rc geninfo_unexecuted_blocks=1 00:20:15.449 00:20:15.449 ' 00:20:15.449 13:04:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:15.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.449 --rc genhtml_branch_coverage=1 00:20:15.449 --rc genhtml_function_coverage=1 00:20:15.449 --rc genhtml_legend=1 00:20:15.449 --rc geninfo_all_blocks=1 00:20:15.449 --rc geninfo_unexecuted_blocks=1 00:20:15.449 00:20:15.449 ' 00:20:15.449 13:04:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:15.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.449 --rc genhtml_branch_coverage=1 00:20:15.449 --rc genhtml_function_coverage=1 00:20:15.449 --rc genhtml_legend=1 00:20:15.449 --rc geninfo_all_blocks=1 00:20:15.449 --rc geninfo_unexecuted_blocks=1 00:20:15.449 00:20:15.449 ' 00:20:15.449 13:04:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:15.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.449 --rc genhtml_branch_coverage=1 00:20:15.449 --rc genhtml_function_coverage=1 00:20:15.449 --rc genhtml_legend=1 00:20:15.449 --rc geninfo_all_blocks=1 00:20:15.449 --rc geninfo_unexecuted_blocks=1 00:20:15.449 00:20:15.449 ' 00:20:15.449 13:04:56 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.449 13:04:56 -- nvmf/common.sh@7 -- # uname -s 00:20:15.449 13:04:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.449 13:04:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.449 13:04:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.449 13:04:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.449 13:04:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.449 13:04:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.449 13:04:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.709 13:04:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.709 13:04:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.709 13:04:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.709 13:04:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:15.709 
13:04:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:15.709 13:04:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.709 13:04:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.709 13:04:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.709 13:04:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.709 13:04:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.709 13:04:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.709 13:04:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.709 13:04:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.709 13:04:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.709 13:04:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.709 13:04:56 -- paths/export.sh@5 -- # export PATH 00:20:15.709 13:04:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.709 13:04:56 -- nvmf/common.sh@46 -- # : 0 00:20:15.709 13:04:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:15.709 13:04:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:15.709 13:04:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:15.709 13:04:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.709 13:04:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.709 13:04:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:15.709 13:04:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:15.709 13:04:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:15.709 13:04:56 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:15.709 13:04:56 -- host/dma.sh@13 -- # exit 0 00:20:15.709 00:20:15.709 real 0m0.196s 00:20:15.709 user 0m0.119s 00:20:15.709 sys 0m0.089s 00:20:15.709 13:04:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:15.709 13:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:15.709 ************************************ 00:20:15.709 END TEST dma 00:20:15.709 ************************************ 00:20:15.709 13:04:56 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.709 13:04:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.709 13:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.709 13:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:15.709 ************************************ 00:20:15.709 START TEST nvmf_identify 00:20:15.709 ************************************ 00:20:15.709 13:04:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.709 * Looking for test storage... 00:20:15.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.709 13:04:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:15.709 13:04:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:15.709 13:04:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:15.709 13:04:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:15.709 13:04:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:15.709 13:04:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:15.709 13:04:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:15.709 13:04:56 -- scripts/common.sh@335 -- # IFS=.-: 00:20:15.709 13:04:56 -- scripts/common.sh@335 -- # read -ra ver1 00:20:15.709 13:04:56 -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.709 13:04:56 -- scripts/common.sh@336 -- # read -ra ver2 00:20:15.709 13:04:56 -- scripts/common.sh@337 -- # local 'op=<' 00:20:15.709 13:04:56 -- scripts/common.sh@339 -- # ver1_l=2 00:20:15.709 13:04:56 -- scripts/common.sh@340 -- # ver2_l=1 00:20:15.709 13:04:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:15.709 13:04:56 -- scripts/common.sh@343 -- # case "$op" in 00:20:15.709 13:04:56 -- scripts/common.sh@344 -- # : 1 00:20:15.709 13:04:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:15.709 13:04:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.709 13:04:56 -- scripts/common.sh@364 -- # decimal 1 00:20:15.709 13:04:56 -- scripts/common.sh@352 -- # local d=1 00:20:15.709 13:04:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.709 13:04:56 -- scripts/common.sh@354 -- # echo 1 00:20:15.709 13:04:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:15.709 13:04:56 -- scripts/common.sh@365 -- # decimal 2 00:20:15.709 13:04:56 -- scripts/common.sh@352 -- # local d=2 00:20:15.709 13:04:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.709 13:04:56 -- scripts/common.sh@354 -- # echo 2 00:20:15.709 13:04:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:15.709 13:04:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:15.709 13:04:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:15.709 13:04:56 -- scripts/common.sh@367 -- # return 0 00:20:15.709 13:04:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.709 13:04:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.709 --rc genhtml_branch_coverage=1 00:20:15.709 --rc genhtml_function_coverage=1 00:20:15.709 --rc genhtml_legend=1 00:20:15.709 --rc geninfo_all_blocks=1 00:20:15.709 --rc geninfo_unexecuted_blocks=1 00:20:15.709 00:20:15.709 ' 00:20:15.709 13:04:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.709 --rc genhtml_branch_coverage=1 00:20:15.709 --rc genhtml_function_coverage=1 00:20:15.709 --rc genhtml_legend=1 00:20:15.709 --rc geninfo_all_blocks=1 00:20:15.709 --rc geninfo_unexecuted_blocks=1 00:20:15.709 00:20:15.709 ' 00:20:15.709 13:04:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.709 --rc genhtml_branch_coverage=1 00:20:15.709 --rc genhtml_function_coverage=1 00:20:15.709 --rc genhtml_legend=1 00:20:15.709 --rc geninfo_all_blocks=1 00:20:15.709 --rc geninfo_unexecuted_blocks=1 00:20:15.709 00:20:15.709 ' 00:20:15.709 13:04:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:15.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.709 --rc genhtml_branch_coverage=1 00:20:15.709 --rc genhtml_function_coverage=1 00:20:15.709 --rc genhtml_legend=1 00:20:15.709 --rc geninfo_all_blocks=1 00:20:15.709 --rc geninfo_unexecuted_blocks=1 00:20:15.709 00:20:15.709 ' 00:20:15.709 13:04:56 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.709 13:04:56 -- nvmf/common.sh@7 -- # uname -s 00:20:15.709 13:04:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.709 13:04:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.709 13:04:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.709 13:04:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.709 13:04:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.709 13:04:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.709 13:04:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.709 13:04:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.709 13:04:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.709 13:04:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.968 13:04:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:15.968 
13:04:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:15.968 13:04:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.968 13:04:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.968 13:04:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.968 13:04:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.968 13:04:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.968 13:04:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.968 13:04:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.969 13:04:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.969 13:04:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.969 13:04:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.969 13:04:56 -- paths/export.sh@5 -- # export PATH 00:20:15.969 13:04:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.969 13:04:56 -- nvmf/common.sh@46 -- # : 0 00:20:15.969 13:04:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:15.969 13:04:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:15.969 13:04:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:15.969 13:04:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.969 13:04:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.969 13:04:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:15.969 13:04:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:15.969 13:04:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:15.969 13:04:56 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.969 13:04:56 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.969 13:04:56 -- host/identify.sh@14 -- # nvmftestinit 00:20:15.969 13:04:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:15.969 13:04:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.969 13:04:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:15.969 13:04:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:15.969 13:04:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:15.969 13:04:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.969 13:04:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.969 13:04:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.969 13:04:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:15.969 13:04:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:15.969 13:04:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:15.969 13:04:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:15.969 13:04:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:15.969 13:04:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:15.969 13:04:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.969 13:04:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.969 13:04:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.969 13:04:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:15.969 13:04:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.969 13:04:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.969 13:04:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.969 13:04:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.969 13:04:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.969 13:04:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.969 13:04:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.969 13:04:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.969 13:04:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:15.969 13:04:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:15.969 Cannot find device "nvmf_tgt_br" 00:20:15.969 13:04:56 -- nvmf/common.sh@154 -- # true 00:20:15.969 13:04:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.969 Cannot find device "nvmf_tgt_br2" 00:20:15.969 13:04:56 -- nvmf/common.sh@155 -- # true 00:20:15.969 13:04:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:15.969 13:04:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:15.969 Cannot find device "nvmf_tgt_br" 00:20:15.969 13:04:56 -- nvmf/common.sh@157 -- # true 00:20:15.969 13:04:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:15.969 Cannot find device "nvmf_tgt_br2" 00:20:15.969 13:04:56 -- nvmf/common.sh@158 -- # true 00:20:15.969 13:04:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:15.969 13:04:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:15.969 13:04:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.969 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:15.969 13:04:56 -- nvmf/common.sh@161 -- # true 00:20:15.969 13:04:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.969 13:04:56 -- nvmf/common.sh@162 -- # true 00:20:15.969 13:04:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.969 13:04:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.969 13:04:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.969 13:04:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.969 13:04:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.969 13:04:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.969 13:04:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.969 13:04:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.969 13:04:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.969 13:04:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:15.969 13:04:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:16.228 13:04:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:16.228 13:04:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:16.228 13:04:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:16.228 13:04:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:16.228 13:04:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:16.228 13:04:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:16.228 13:04:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:16.228 13:04:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:16.228 13:04:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:16.228 13:04:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:16.228 13:04:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:16.228 13:04:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.228 13:04:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:16.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:16.228 00:20:16.228 --- 10.0.0.2 ping statistics --- 00:20:16.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.228 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:16.228 13:04:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:16.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:16.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:20:16.228 00:20:16.228 --- 10.0.0.3 ping statistics --- 00:20:16.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.228 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:16.228 13:04:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:20:16.228 00:20:16.228 --- 10.0.0.1 ping statistics --- 00:20:16.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.228 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:16.228 13:04:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.228 13:04:56 -- nvmf/common.sh@421 -- # return 0 00:20:16.228 13:04:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:16.228 13:04:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.228 13:04:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:16.228 13:04:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:16.228 13:04:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.228 13:04:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:16.228 13:04:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:16.228 13:04:56 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:16.228 13:04:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.228 13:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:16.228 13:04:56 -- host/identify.sh@19 -- # nvmfpid=93162 00:20:16.228 13:04:56 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:16.228 13:04:56 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:16.228 13:04:56 -- host/identify.sh@23 -- # waitforlisten 93162 00:20:16.228 13:04:56 -- common/autotest_common.sh@829 -- # '[' -z 93162 ']' 00:20:16.228 13:04:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.228 13:04:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.228 13:04:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.228 13:04:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.228 13:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:16.228 [2024-12-13 13:04:56.921553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:16.228 [2024-12-13 13:04:56.921646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.487 [2024-12-13 13:04:57.061363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.487 [2024-12-13 13:04:57.136179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:16.487 [2024-12-13 13:04:57.136326] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.487 [2024-12-13 13:04:57.136338] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.487 [2024-12-13 13:04:57.136355] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
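For reference, the virtual network that nvmf_veth_init builds in the trace above boils down to the following sequence. This is a minimal sketch reconstructed from the commands logged by nvmf/common.sh (the initial teardown of leftover interfaces is skipped); the interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.0/24 addresses are the harness defaults shown in the trace, not general requirements.
# Target-side veth ends live in a dedicated namespace; the initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# The initiator gets 10.0.0.1; the target namespace gets 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up, then tie the root-namespace peers together with the nvmf_br bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge, then sanity-check reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
With this topology in place, the target application is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ..., as in the trace below), so it listens on 10.0.0.2:4420 while the initiator-side tools run from the root namespace.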
00:20:16.487 [2024-12-13 13:04:57.136507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.487 [2024-12-13 13:04:57.137395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.487 [2024-12-13 13:04:57.137516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.487 [2024-12-13 13:04:57.137520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.423 13:04:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.423 13:04:57 -- common/autotest_common.sh@862 -- # return 0 00:20:17.423 13:04:57 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.423 13:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.423 13:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 [2024-12-13 13:04:57.944379] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.423 13:04:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.423 13:04:57 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:17.423 13:04:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.423 13:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 13:04:57 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:17.423 13:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.423 13:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 Malloc0 00:20:17.423 13:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.423 13:04:58 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.423 13:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.423 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 13:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.423 13:04:58 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:17.423 13:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.423 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 13:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.423 13:04:58 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.423 13:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.423 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 [2024-12-13 13:04:58.047333] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.423 13:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.423 13:04:58 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:17.423 13:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.423 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 13:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.423 13:04:58 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:17.423 13:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.423 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 [2024-12-13 13:04:58.063081] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:17.423 [ 
00:20:17.423 { 00:20:17.423 "allow_any_host": true, 00:20:17.423 "hosts": [], 00:20:17.423 "listen_addresses": [ 00:20:17.423 { 00:20:17.424 "adrfam": "IPv4", 00:20:17.424 "traddr": "10.0.0.2", 00:20:17.424 "transport": "TCP", 00:20:17.424 "trsvcid": "4420", 00:20:17.424 "trtype": "TCP" 00:20:17.424 } 00:20:17.424 ], 00:20:17.424 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.424 "subtype": "Discovery" 00:20:17.424 }, 00:20:17.424 { 00:20:17.424 "allow_any_host": true, 00:20:17.424 "hosts": [], 00:20:17.424 "listen_addresses": [ 00:20:17.424 { 00:20:17.424 "adrfam": "IPv4", 00:20:17.424 "traddr": "10.0.0.2", 00:20:17.424 "transport": "TCP", 00:20:17.424 "trsvcid": "4420", 00:20:17.424 "trtype": "TCP" 00:20:17.424 } 00:20:17.424 ], 00:20:17.424 "max_cntlid": 65519, 00:20:17.424 "max_namespaces": 32, 00:20:17.424 "min_cntlid": 1, 00:20:17.424 "model_number": "SPDK bdev Controller", 00:20:17.424 "namespaces": [ 00:20:17.424 { 00:20:17.424 "bdev_name": "Malloc0", 00:20:17.424 "eui64": "ABCDEF0123456789", 00:20:17.424 "name": "Malloc0", 00:20:17.424 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:17.424 "nsid": 1, 00:20:17.424 "uuid": "1ddce64d-362e-4265-b846-c7d92ca0f104" 00:20:17.424 } 00:20:17.424 ], 00:20:17.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.424 "serial_number": "SPDK00000000000001", 00:20:17.424 "subtype": "NVMe" 00:20:17.424 } 00:20:17.424 ] 00:20:17.424 13:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.424 13:04:58 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:17.424 [2024-12-13 13:04:58.097822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
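The target configuration above is issued through the rpc_cmd helper, which in the SPDK test harness forwards to scripts/rpc.py against the /var/tmp/spdk.sock socket (an assumption about the wrapper; the RPC names and arguments below are taken verbatim from the trace). Reproducing the same setup by hand would look roughly like this sketch:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # repo path as used in this workspace
# Create the TCP transport and a 64 MiB malloc bdev with 512-byte blocks (flags exactly as traced).
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
# Create the NVM subsystem, attach the bdev as namespace 1, and expose it plus the discovery service on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems   # should report the discovery subsystem and cnode1, as in the JSON above
The test then points spdk_nvme_identify at the discovery subsystem over TCP (trtype:tcp traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery), which produces the controller attributes and discovery-log dump that follows.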
00:20:17.424 [2024-12-13 13:04:58.097860] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93215 ] 00:20:17.686 [2024-12-13 13:04:58.230591] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:17.686 [2024-12-13 13:04:58.230640] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:17.686 [2024-12-13 13:04:58.230646] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:17.686 [2024-12-13 13:04:58.230655] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:17.686 [2024-12-13 13:04:58.230664] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:17.686 [2024-12-13 13:04:58.230786] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:17.686 [2024-12-13 13:04:58.230837] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2251540 0 00:20:17.686 [2024-12-13 13:04:58.243848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:17.686 [2024-12-13 13:04:58.243870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:17.686 [2024-12-13 13:04:58.243876] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:17.686 [2024-12-13 13:04:58.243880] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:17.686 [2024-12-13 13:04:58.243924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.243931] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.243935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.686 [2024-12-13 13:04:58.243947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:17.686 [2024-12-13 13:04:58.244010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.686 [2024-12-13 13:04:58.251820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.686 [2024-12-13 13:04:58.251837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.686 [2024-12-13 13:04:58.251841] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.251847] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.686 [2024-12-13 13:04:58.251865] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:17.686 [2024-12-13 13:04:58.251872] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:17.686 [2024-12-13 13:04:58.251878] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:17.686 [2024-12-13 13:04:58.251894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.251898] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.686 [2024-12-13 
13:04:58.251902] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.686 [2024-12-13 13:04:58.251911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.686 [2024-12-13 13:04:58.251969] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.686 [2024-12-13 13:04:58.252049] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.686 [2024-12-13 13:04:58.252057] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.686 [2024-12-13 13:04:58.252061] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.252065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.686 [2024-12-13 13:04:58.252072] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:17.686 [2024-12-13 13:04:58.252081] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:17.686 [2024-12-13 13:04:58.252089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.252094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.252098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.686 [2024-12-13 13:04:58.252106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.686 [2024-12-13 13:04:58.252127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.686 [2024-12-13 13:04:58.252183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.686 [2024-12-13 13:04:58.252190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.686 [2024-12-13 13:04:58.252194] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.252214] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.686 [2024-12-13 13:04:58.252221] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:17.686 [2024-12-13 13:04:58.252230] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:17.686 [2024-12-13 13:04:58.252237] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.252242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.252246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.686 [2024-12-13 13:04:58.252253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.686 [2024-12-13 13:04:58.252272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.686 [2024-12-13 13:04:58.252324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.686 [2024-12-13 13:04:58.252331] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.686 [2024-12-13 13:04:58.252335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.686 [2024-12-13 13:04:58.252340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.687 [2024-12-13 13:04:58.252347] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:17.687 [2024-12-13 13:04:58.252357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.252373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.687 [2024-12-13 13:04:58.252406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.687 [2024-12-13 13:04:58.252455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.687 [2024-12-13 13:04:58.252462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.687 [2024-12-13 13:04:58.252466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.687 [2024-12-13 13:04:58.252477] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:17.687 [2024-12-13 13:04:58.252482] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:17.687 [2024-12-13 13:04:58.252490] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:17.687 [2024-12-13 13:04:58.252596] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:17.687 [2024-12-13 13:04:58.252607] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:17.687 [2024-12-13 13:04:58.252616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.252633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.687 [2024-12-13 13:04:58.252653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.687 [2024-12-13 13:04:58.252704] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.687 [2024-12-13 13:04:58.252715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.687 [2024-12-13 13:04:58.252720] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:17.687 [2024-12-13 13:04:58.252724] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.687 [2024-12-13 13:04:58.252731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:17.687 [2024-12-13 13:04:58.252752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.252771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.687 [2024-12-13 13:04:58.252791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.687 [2024-12-13 13:04:58.252853] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.687 [2024-12-13 13:04:58.252860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.687 [2024-12-13 13:04:58.252864] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252869] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.687 [2024-12-13 13:04:58.252875] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:17.687 [2024-12-13 13:04:58.252881] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:17.687 [2024-12-13 13:04:58.252890] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:17.687 [2024-12-13 13:04:58.252906] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:17.687 [2024-12-13 13:04:58.252917] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252922] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.252926] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.252934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.687 [2024-12-13 13:04:58.252954] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.687 [2024-12-13 13:04:58.253061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.687 [2024-12-13 13:04:58.253069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.687 [2024-12-13 13:04:58.253073] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253077] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2251540): datao=0, datal=4096, cccid=0 00:20:17.687 [2024-12-13 13:04:58.253083] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a220) on tqpair(0x2251540): expected_datao=0, 
payload_size=4096 00:20:17.687 [2024-12-13 13:04:58.253092] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253097] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.687 [2024-12-13 13:04:58.253113] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.687 [2024-12-13 13:04:58.253117] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.687 [2024-12-13 13:04:58.253132] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:17.687 [2024-12-13 13:04:58.253138] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:17.687 [2024-12-13 13:04:58.253143] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:17.687 [2024-12-13 13:04:58.253148] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:17.687 [2024-12-13 13:04:58.253154] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:17.687 [2024-12-13 13:04:58.253159] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:17.687 [2024-12-13 13:04:58.253172] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:17.687 [2024-12-13 13:04:58.253181] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253189] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.253197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.687 [2024-12-13 13:04:58.253218] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.687 [2024-12-13 13:04:58.253276] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.687 [2024-12-13 13:04:58.253284] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.687 [2024-12-13 13:04:58.253288] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253292] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a220) on tqpair=0x2251540 00:20:17.687 [2024-12-13 13:04:58.253301] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253306] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.253317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.687 [2024-12-13 
13:04:58.253323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.253338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.687 [2024-12-13 13:04:58.253344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.253358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.687 [2024-12-13 13:04:58.253364] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253368] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253372] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.253378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.687 [2024-12-13 13:04:58.253385] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:17.687 [2024-12-13 13:04:58.253398] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:17.687 [2024-12-13 13:04:58.253406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253410] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.687 [2024-12-13 13:04:58.253414] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2251540) 00:20:17.687 [2024-12-13 13:04:58.253421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.687 [2024-12-13 13:04:58.253442] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a220, cid 0, qid 0 00:20:17.687 [2024-12-13 13:04:58.253450] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a380, cid 1, qid 0 00:20:17.688 [2024-12-13 13:04:58.253455] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4e0, cid 2, qid 0 00:20:17.688 [2024-12-13 13:04:58.253460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.688 [2024-12-13 13:04:58.253465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a7a0, cid 4, qid 0 00:20:17.688 [2024-12-13 13:04:58.253571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.688 [2024-12-13 13:04:58.253578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.688 [2024-12-13 13:04:58.253582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253587] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x228a7a0) on tqpair=0x2251540 00:20:17.688 [2024-12-13 13:04:58.253594] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:17.688 [2024-12-13 13:04:58.253600] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:17.688 [2024-12-13 13:04:58.253611] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2251540) 00:20:17.688 [2024-12-13 13:04:58.253628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.688 [2024-12-13 13:04:58.253646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a7a0, cid 4, qid 0 00:20:17.688 [2024-12-13 13:04:58.253707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.688 [2024-12-13 13:04:58.253714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.688 [2024-12-13 13:04:58.253718] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253722] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2251540): datao=0, datal=4096, cccid=4 00:20:17.688 [2024-12-13 13:04:58.253728] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a7a0) on tqpair(0x2251540): expected_datao=0, payload_size=4096 00:20:17.688 [2024-12-13 13:04:58.253736] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253741] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253750] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.688 [2024-12-13 13:04:58.253770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.688 [2024-12-13 13:04:58.253778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a7a0) on tqpair=0x2251540 00:20:17.688 [2024-12-13 13:04:58.253798] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:17.688 [2024-12-13 13:04:58.253841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253852] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253856] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2251540) 00:20:17.688 [2024-12-13 13:04:58.253864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.688 [2024-12-13 13:04:58.253873] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253877] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.253881] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2251540) 00:20:17.688 [2024-12-13 13:04:58.253887] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.688 [2024-12-13 13:04:58.253918] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a7a0, cid 4, qid 0 00:20:17.688 [2024-12-13 13:04:58.253925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a900, cid 5, qid 0 00:20:17.688 [2024-12-13 13:04:58.254045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.688 [2024-12-13 13:04:58.254058] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.688 [2024-12-13 13:04:58.254062] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.254066] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2251540): datao=0, datal=1024, cccid=4 00:20:17.688 [2024-12-13 13:04:58.254071] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a7a0) on tqpair(0x2251540): expected_datao=0, payload_size=1024 00:20:17.688 [2024-12-13 13:04:58.254080] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.254084] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.254090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.688 [2024-12-13 13:04:58.254096] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.688 [2024-12-13 13:04:58.254100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.254104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a900) on tqpair=0x2251540 00:20:17.688 [2024-12-13 13:04:58.298794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.688 [2024-12-13 13:04:58.298813] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.688 [2024-12-13 13:04:58.298818] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.298823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a7a0) on tqpair=0x2251540 00:20:17.688 [2024-12-13 13:04:58.298837] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.298842] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.298846] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2251540) 00:20:17.688 [2024-12-13 13:04:58.298855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.688 [2024-12-13 13:04:58.298888] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a7a0, cid 4, qid 0 00:20:17.688 [2024-12-13 13:04:58.299001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.688 [2024-12-13 13:04:58.299010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.688 [2024-12-13 13:04:58.299014] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299019] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2251540): datao=0, datal=3072, cccid=4 00:20:17.688 [2024-12-13 13:04:58.299024] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a7a0) on tqpair(0x2251540): expected_datao=0, payload_size=3072 00:20:17.688 [2024-12-13 
13:04:58.299032] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299037] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299046] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.688 [2024-12-13 13:04:58.299053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.688 [2024-12-13 13:04:58.299057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a7a0) on tqpair=0x2251540 00:20:17.688 [2024-12-13 13:04:58.299073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299078] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299082] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2251540) 00:20:17.688 [2024-12-13 13:04:58.299090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.688 [2024-12-13 13:04:58.299148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a7a0, cid 4, qid 0 00:20:17.688 [2024-12-13 13:04:58.299222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.688 [2024-12-13 13:04:58.299229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.688 [2024-12-13 13:04:58.299233] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299238] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2251540): datao=0, datal=8, cccid=4 00:20:17.688 [2024-12-13 13:04:58.299243] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a7a0) on tqpair(0x2251540): expected_datao=0, payload_size=8 00:20:17.688 [2024-12-13 13:04:58.299251] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.299256] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.339847] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.688 [2024-12-13 13:04:58.339869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.688 [2024-12-13 13:04:58.339892] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.688 [2024-12-13 13:04:58.339897] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a7a0) on tqpair=0x2251540 00:20:17.688 ===================================================== 00:20:17.688 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:17.688 ===================================================== 00:20:17.688 Controller Capabilities/Features 00:20:17.688 ================================ 00:20:17.688 Vendor ID: 0000 00:20:17.688 Subsystem Vendor ID: 0000 00:20:17.688 Serial Number: .................... 00:20:17.688 Model Number: ........................................ 
00:20:17.688 Firmware Version: 24.01.1 00:20:17.688 Recommended Arb Burst: 0 00:20:17.688 IEEE OUI Identifier: 00 00 00 00:20:17.688 Multi-path I/O 00:20:17.688 May have multiple subsystem ports: No 00:20:17.688 May have multiple controllers: No 00:20:17.688 Associated with SR-IOV VF: No 00:20:17.688 Max Data Transfer Size: 131072 00:20:17.688 Max Number of Namespaces: 0 00:20:17.688 Max Number of I/O Queues: 1024 00:20:17.688 NVMe Specification Version (VS): 1.3 00:20:17.688 NVMe Specification Version (Identify): 1.3 00:20:17.688 Maximum Queue Entries: 128 00:20:17.688 Contiguous Queues Required: Yes 00:20:17.688 Arbitration Mechanisms Supported 00:20:17.688 Weighted Round Robin: Not Supported 00:20:17.688 Vendor Specific: Not Supported 00:20:17.688 Reset Timeout: 15000 ms 00:20:17.688 Doorbell Stride: 4 bytes 00:20:17.688 NVM Subsystem Reset: Not Supported 00:20:17.688 Command Sets Supported 00:20:17.688 NVM Command Set: Supported 00:20:17.688 Boot Partition: Not Supported 00:20:17.688 Memory Page Size Minimum: 4096 bytes 00:20:17.689 Memory Page Size Maximum: 4096 bytes 00:20:17.689 Persistent Memory Region: Not Supported 00:20:17.689 Optional Asynchronous Events Supported 00:20:17.689 Namespace Attribute Notices: Not Supported 00:20:17.689 Firmware Activation Notices: Not Supported 00:20:17.689 ANA Change Notices: Not Supported 00:20:17.689 PLE Aggregate Log Change Notices: Not Supported 00:20:17.689 LBA Status Info Alert Notices: Not Supported 00:20:17.689 EGE Aggregate Log Change Notices: Not Supported 00:20:17.689 Normal NVM Subsystem Shutdown event: Not Supported 00:20:17.689 Zone Descriptor Change Notices: Not Supported 00:20:17.689 Discovery Log Change Notices: Supported 00:20:17.689 Controller Attributes 00:20:17.689 128-bit Host Identifier: Not Supported 00:20:17.689 Non-Operational Permissive Mode: Not Supported 00:20:17.689 NVM Sets: Not Supported 00:20:17.689 Read Recovery Levels: Not Supported 00:20:17.689 Endurance Groups: Not Supported 00:20:17.689 Predictable Latency Mode: Not Supported 00:20:17.689 Traffic Based Keep ALive: Not Supported 00:20:17.689 Namespace Granularity: Not Supported 00:20:17.689 SQ Associations: Not Supported 00:20:17.689 UUID List: Not Supported 00:20:17.689 Multi-Domain Subsystem: Not Supported 00:20:17.689 Fixed Capacity Management: Not Supported 00:20:17.689 Variable Capacity Management: Not Supported 00:20:17.689 Delete Endurance Group: Not Supported 00:20:17.689 Delete NVM Set: Not Supported 00:20:17.689 Extended LBA Formats Supported: Not Supported 00:20:17.689 Flexible Data Placement Supported: Not Supported 00:20:17.689 00:20:17.689 Controller Memory Buffer Support 00:20:17.689 ================================ 00:20:17.689 Supported: No 00:20:17.689 00:20:17.689 Persistent Memory Region Support 00:20:17.689 ================================ 00:20:17.689 Supported: No 00:20:17.689 00:20:17.689 Admin Command Set Attributes 00:20:17.689 ============================ 00:20:17.689 Security Send/Receive: Not Supported 00:20:17.689 Format NVM: Not Supported 00:20:17.689 Firmware Activate/Download: Not Supported 00:20:17.689 Namespace Management: Not Supported 00:20:17.689 Device Self-Test: Not Supported 00:20:17.689 Directives: Not Supported 00:20:17.689 NVMe-MI: Not Supported 00:20:17.689 Virtualization Management: Not Supported 00:20:17.689 Doorbell Buffer Config: Not Supported 00:20:17.689 Get LBA Status Capability: Not Supported 00:20:17.689 Command & Feature Lockdown Capability: Not Supported 00:20:17.689 Abort Command Limit: 1 00:20:17.689 
Async Event Request Limit: 4 00:20:17.689 Number of Firmware Slots: N/A 00:20:17.689 Firmware Slot 1 Read-Only: N/A 00:20:17.689 Firmware Activation Without Reset: N/A 00:20:17.689 Multiple Update Detection Support: N/A 00:20:17.689 Firmware Update Granularity: No Information Provided 00:20:17.689 Per-Namespace SMART Log: No 00:20:17.689 Asymmetric Namespace Access Log Page: Not Supported 00:20:17.689 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:17.689 Command Effects Log Page: Not Supported 00:20:17.689 Get Log Page Extended Data: Supported 00:20:17.689 Telemetry Log Pages: Not Supported 00:20:17.689 Persistent Event Log Pages: Not Supported 00:20:17.689 Supported Log Pages Log Page: May Support 00:20:17.689 Commands Supported & Effects Log Page: Not Supported 00:20:17.689 Feature Identifiers & Effects Log Page:May Support 00:20:17.689 NVMe-MI Commands & Effects Log Page: May Support 00:20:17.689 Data Area 4 for Telemetry Log: Not Supported 00:20:17.689 Error Log Page Entries Supported: 128 00:20:17.689 Keep Alive: Not Supported 00:20:17.689 00:20:17.689 NVM Command Set Attributes 00:20:17.689 ========================== 00:20:17.689 Submission Queue Entry Size 00:20:17.689 Max: 1 00:20:17.689 Min: 1 00:20:17.689 Completion Queue Entry Size 00:20:17.689 Max: 1 00:20:17.689 Min: 1 00:20:17.689 Number of Namespaces: 0 00:20:17.689 Compare Command: Not Supported 00:20:17.689 Write Uncorrectable Command: Not Supported 00:20:17.689 Dataset Management Command: Not Supported 00:20:17.689 Write Zeroes Command: Not Supported 00:20:17.689 Set Features Save Field: Not Supported 00:20:17.689 Reservations: Not Supported 00:20:17.689 Timestamp: Not Supported 00:20:17.689 Copy: Not Supported 00:20:17.689 Volatile Write Cache: Not Present 00:20:17.689 Atomic Write Unit (Normal): 1 00:20:17.689 Atomic Write Unit (PFail): 1 00:20:17.689 Atomic Compare & Write Unit: 1 00:20:17.689 Fused Compare & Write: Supported 00:20:17.689 Scatter-Gather List 00:20:17.689 SGL Command Set: Supported 00:20:17.689 SGL Keyed: Supported 00:20:17.689 SGL Bit Bucket Descriptor: Not Supported 00:20:17.689 SGL Metadata Pointer: Not Supported 00:20:17.689 Oversized SGL: Not Supported 00:20:17.689 SGL Metadata Address: Not Supported 00:20:17.689 SGL Offset: Supported 00:20:17.689 Transport SGL Data Block: Not Supported 00:20:17.689 Replay Protected Memory Block: Not Supported 00:20:17.689 00:20:17.689 Firmware Slot Information 00:20:17.689 ========================= 00:20:17.689 Active slot: 0 00:20:17.689 00:20:17.689 00:20:17.689 Error Log 00:20:17.689 ========= 00:20:17.689 00:20:17.689 Active Namespaces 00:20:17.689 ================= 00:20:17.689 Discovery Log Page 00:20:17.689 ================== 00:20:17.689 Generation Counter: 2 00:20:17.689 Number of Records: 2 00:20:17.689 Record Format: 0 00:20:17.689 00:20:17.689 Discovery Log Entry 0 00:20:17.689 ---------------------- 00:20:17.689 Transport Type: 3 (TCP) 00:20:17.689 Address Family: 1 (IPv4) 00:20:17.689 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:17.689 Entry Flags: 00:20:17.689 Duplicate Returned Information: 1 00:20:17.689 Explicit Persistent Connection Support for Discovery: 1 00:20:17.689 Transport Requirements: 00:20:17.689 Secure Channel: Not Required 00:20:17.689 Port ID: 0 (0x0000) 00:20:17.689 Controller ID: 65535 (0xffff) 00:20:17.689 Admin Max SQ Size: 128 00:20:17.689 Transport Service Identifier: 4420 00:20:17.689 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:17.689 Transport Address: 10.0.0.2 00:20:17.689 
Discovery Log Entry 1 00:20:17.689 ---------------------- 00:20:17.689 Transport Type: 3 (TCP) 00:20:17.689 Address Family: 1 (IPv4) 00:20:17.689 Subsystem Type: 2 (NVM Subsystem) 00:20:17.689 Entry Flags: 00:20:17.689 Duplicate Returned Information: 0 00:20:17.689 Explicit Persistent Connection Support for Discovery: 0 00:20:17.689 Transport Requirements: 00:20:17.689 Secure Channel: Not Required 00:20:17.689 Port ID: 0 (0x0000) 00:20:17.689 Controller ID: 65535 (0xffff) 00:20:17.689 Admin Max SQ Size: 128 00:20:17.689 Transport Service Identifier: 4420 00:20:17.689 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:17.689 Transport Address: 10.0.0.2 [2024-12-13 13:04:58.340018] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:17.689 [2024-12-13 13:04:58.340037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.689 [2024-12-13 13:04:58.340044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.689 [2024-12-13 13:04:58.340050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.689 [2024-12-13 13:04:58.340056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.689 [2024-12-13 13:04:58.340081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.689 [2024-12-13 13:04:58.340101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.689 [2024-12-13 13:04:58.340106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.689 [2024-12-13 13:04:58.340115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.689 [2024-12-13 13:04:58.340142] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.689 [2024-12-13 13:04:58.340205] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.689 [2024-12-13 13:04:58.340213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.689 [2024-12-13 13:04:58.340217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.689 [2024-12-13 13:04:58.340221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.689 [2024-12-13 13:04:58.340230] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.689 [2024-12-13 13:04:58.340235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.689 [2024-12-13 13:04:58.340239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.689 [2024-12-13 13:04:58.340246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.689 [2024-12-13 13:04:58.340270] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.689 [2024-12-13 13:04:58.340335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.689 [2024-12-13 13:04:58.340342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.689 [2024-12-13 13:04:58.340345] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.340356] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:17.690 [2024-12-13 13:04:58.340361] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:17.690 [2024-12-13 13:04:58.340371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340375] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340379] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.340387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.340405] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.340456] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.340463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.340467] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340471] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.340483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.340499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.340516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.340565] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.340572] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.340576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.340591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.340607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.340625] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.340673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 
13:04:58.340679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.340683] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340687] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.340699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340703] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.340715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.340732] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.340792] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.340801] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.340805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.340820] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340825] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340829] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.340836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.340856] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.340907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.340914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.340918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.340933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.340942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.340949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.340967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.341021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.341037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.341042] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:17.690 [2024-12-13 13:04:58.341046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.341058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.341075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.341094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.341144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.341156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.341160] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341164] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.341176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.341192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.341211] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.341264] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.341270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.341274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.341290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341294] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341298] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.341306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.341323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.341373] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.341380] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.341384] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.341399] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.341415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.341432] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.341483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.341494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.341498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.341514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.341530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.341549] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.690 [2024-12-13 13:04:58.341599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.690 [2024-12-13 13:04:58.341606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.690 [2024-12-13 13:04:58.341609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341614] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.690 [2024-12-13 13:04:58.341625] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341629] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.690 [2024-12-13 13:04:58.341633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.690 [2024-12-13 13:04:58.341641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.690 [2024-12-13 13:04:58.341659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.341709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.341716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.341720] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341724] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.691 [2024-12-13 13:04:58.341735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.691 [2024-12-13 
13:04:58.341754] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.691 [2024-12-13 13:04:58.341777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.691 [2024-12-13 13:04:58.341798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.341850] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.341857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.341862] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341866] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.691 [2024-12-13 13:04:58.341878] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341883] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341887] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.691 [2024-12-13 13:04:58.341894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.691 [2024-12-13 13:04:58.341913] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.341962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.341969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.341973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.691 [2024-12-13 13:04:58.341988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.341997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.691 [2024-12-13 13:04:58.342005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.691 [2024-12-13 13:04:58.342023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.342076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.342083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.342087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.691 [2024-12-13 13:04:58.342102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.691 [2024-12-13 13:04:58.342133] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.691 [2024-12-13 13:04:58.342150] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.342201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.342207] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.342211] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342215] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.691 [2024-12-13 13:04:58.342227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342232] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.691 [2024-12-13 13:04:58.342243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.691 [2024-12-13 13:04:58.342260] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.342307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.342314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.342318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.691 [2024-12-13 13:04:58.342333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342342] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.691 [2024-12-13 13:04:58.342350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.691 [2024-12-13 13:04:58.342367] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.342417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.342424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.342427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342432] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.691 [2024-12-13 13:04:58.342443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.691 [2024-12-13 13:04:58.342459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.691 [2024-12-13 13:04:58.342476] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.691 [2024-12-13 13:04:58.342527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.691 [2024-12-13 13:04:58.342534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.691 [2024-12-13 13:04:58.342538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.691 [2024-12-13 13:04:58.342542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.706 [2024-12-13 13:04:58.342553] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.706 [2024-12-13 13:04:58.342569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.706 [2024-12-13 13:04:58.342586] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.706 [2024-12-13 13:04:58.342637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.706 [2024-12-13 13:04:58.342644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.706 [2024-12-13 13:04:58.342648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.706 [2024-12-13 13:04:58.342663] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342668] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342672] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.706 [2024-12-13 13:04:58.342679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.706 [2024-12-13 13:04:58.342697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.706 [2024-12-13 13:04:58.342745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.706 [2024-12-13 13:04:58.342752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.706 [2024-12-13 13:04:58.342756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.706 [2024-12-13 13:04:58.342785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342811] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.706 [2024-12-13 13:04:58.342818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.706 [2024-12-13 13:04:58.342839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.706 [2024-12-13 13:04:58.342891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:17.706 [2024-12-13 13:04:58.342898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.706 [2024-12-13 13:04:58.342902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.706 [2024-12-13 13:04:58.342919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.342928] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.706 [2024-12-13 13:04:58.342935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.706 [2024-12-13 13:04:58.342953] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.706 [2024-12-13 13:04:58.343005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.706 [2024-12-13 13:04:58.343013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.706 [2024-12-13 13:04:58.343017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.343022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.706 [2024-12-13 13:04:58.343034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.343039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.343043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.706 [2024-12-13 13:04:58.343050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.706 [2024-12-13 13:04:58.343068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.706 [2024-12-13 13:04:58.343141] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.706 [2024-12-13 13:04:58.343149] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.706 [2024-12-13 13:04:58.343153] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.706 [2024-12-13 13:04:58.343158] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.343184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343189] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343193] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.707 [2024-12-13 13:04:58.343201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.707 [2024-12-13 13:04:58.343220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.707 [2024-12-13 13:04:58.343273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.707 [2024-12-13 13:04:58.343281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.707 [2024-12-13 13:04:58.343284] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.343300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.707 [2024-12-13 13:04:58.343317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.707 [2024-12-13 13:04:58.343335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.707 [2024-12-13 13:04:58.343386] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.707 [2024-12-13 13:04:58.343393] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.707 [2024-12-13 13:04:58.343397] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343401] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.343413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.707 [2024-12-13 13:04:58.343430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.707 [2024-12-13 13:04:58.343447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.707 [2024-12-13 13:04:58.343499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.707 [2024-12-13 13:04:58.343506] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.707 [2024-12-13 13:04:58.343510] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343515] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.343526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343531] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343535] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.707 [2024-12-13 13:04:58.343542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.707 [2024-12-13 13:04:58.343560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.707 [2024-12-13 13:04:58.343626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.707 [2024-12-13 13:04:58.343633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.707 [2024-12-13 13:04:58.343637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343641] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on 
tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.343652] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343657] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343661] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.707 [2024-12-13 13:04:58.343668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.707 [2024-12-13 13:04:58.343685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.707 [2024-12-13 13:04:58.343734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.707 [2024-12-13 13:04:58.343740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.707 [2024-12-13 13:04:58.343744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343748] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.343759] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.343768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.707 [2024-12-13 13:04:58.343791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.707 [2024-12-13 13:04:58.347797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.707 [2024-12-13 13:04:58.347821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.707 [2024-12-13 13:04:58.347829] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.707 [2024-12-13 13:04:58.347833] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.347837] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.347851] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.347856] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.347860] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2251540) 00:20:17.707 [2024-12-13 13:04:58.347868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.707 [2024-12-13 13:04:58.347892] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a640, cid 3, qid 0 00:20:17.707 [2024-12-13 13:04:58.347962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.707 [2024-12-13 13:04:58.347969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.707 [2024-12-13 13:04:58.347973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.707 [2024-12-13 13:04:58.347977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a640) on tqpair=0x2251540 00:20:17.707 [2024-12-13 13:04:58.347986] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 
00:20:17.707 00:20:17.707 13:04:58 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:17.707 [2024-12-13 13:04:58.380956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:17.707 [2024-12-13 13:04:58.381007] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93222 ] 00:20:17.970 [2024-12-13 13:04:58.518364] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:17.970 [2024-12-13 13:04:58.518426] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:17.970 [2024-12-13 13:04:58.518432] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:17.970 [2024-12-13 13:04:58.518441] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:17.970 [2024-12-13 13:04:58.518448] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:17.970 [2024-12-13 13:04:58.518539] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:17.970 [2024-12-13 13:04:58.518586] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2083540 0 00:20:17.970 [2024-12-13 13:04:58.523826] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:17.970 [2024-12-13 13:04:58.523849] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:17.970 [2024-12-13 13:04:58.523854] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:17.970 [2024-12-13 13:04:58.523857] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:17.970 [2024-12-13 13:04:58.523894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.970 [2024-12-13 13:04:58.523899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.970 [2024-12-13 13:04:58.523903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.970 [2024-12-13 13:04:58.523913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:17.970 [2024-12-13 13:04:58.523941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.970 [2024-12-13 13:04:58.531810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.970 [2024-12-13 13:04:58.531829] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.970 [2024-12-13 13:04:58.531850] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.970 [2024-12-13 13:04:58.531854] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.970 [2024-12-13 13:04:58.531863] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:17.970 [2024-12-13 13:04:58.531870] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:17.970 [2024-12-13 13:04:58.531876] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:17.970 [2024-12-13 13:04:58.531889] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.531893] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.531897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.531905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.971 [2024-12-13 13:04:58.531931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.531991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.531998] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.532001] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.532010] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:17.971 [2024-12-13 13:04:58.532018] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:17.971 [2024-12-13 13:04:58.532025] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532032] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.532039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.971 [2024-12-13 13:04:58.532056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.532140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.532147] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.532151] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532155] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.532161] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:17.971 [2024-12-13 13:04:58.532169] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:17.971 [2024-12-13 13:04:58.532176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532180] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.532191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.971 [2024-12-13 13:04:58.532208] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.532261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.532268] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.532272] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532276] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.532282] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:17.971 [2024-12-13 13:04:58.532308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532316] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.532323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.971 [2024-12-13 13:04:58.532341] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.532390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.532396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.532400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.532410] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:17.971 [2024-12-13 13:04:58.532416] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:17.971 [2024-12-13 13:04:58.532423] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:17.971 [2024-12-13 13:04:58.532529] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:17.971 [2024-12-13 13:04:58.532533] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:17.971 [2024-12-13 13:04:58.532541] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532545] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.532556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.971 [2024-12-13 13:04:58.532574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.532626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.532633] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.532637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532641] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.532662] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:17.971 [2024-12-13 13:04:58.532671] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.532686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.971 [2024-12-13 13:04:58.532703] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.532756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.532763] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.532767] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.532776] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:17.971 [2024-12-13 13:04:58.532781] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:17.971 [2024-12-13 13:04:58.532788] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:17.971 [2024-12-13 13:04:58.532802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:17.971 [2024-12-13 13:04:58.532811] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532815] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532831] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.532839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.971 [2024-12-13 13:04:58.532859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.532952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.971 [2024-12-13 13:04:58.532959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.971 [2024-12-13 13:04:58.532963] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532967] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2083540): datao=0, datal=4096, cccid=0 00:20:17.971 [2024-12-13 13:04:58.532971] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bc220) on tqpair(0x2083540): expected_datao=0, payload_size=4096 00:20:17.971 [2024-12-13 13:04:58.532979] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532984] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.532992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.532998] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.533002] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.533005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.533014] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:17.971 [2024-12-13 13:04:58.533019] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:17.971 [2024-12-13 13:04:58.533023] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:17.971 [2024-12-13 13:04:58.533027] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:17.971 [2024-12-13 13:04:58.533034] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:17.971 [2024-12-13 13:04:58.533039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:17.971 [2024-12-13 13:04:58.533052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:17.971 [2024-12-13 13:04:58.533060] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.533064] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.533067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.971 [2024-12-13 13:04:58.533075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.971 [2024-12-13 13:04:58.533095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.971 [2024-12-13 13:04:58.533147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.971 [2024-12-13 13:04:58.533154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.971 [2024-12-13 13:04:58.533158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.533162] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc220) on tqpair=0x2083540 00:20:17.971 [2024-12-13 13:04:58.533169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.971 [2024-12-13 13:04:58.533173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:17.972 [2024-12-13 13:04:58.533189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533193] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533196] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.972 [2024-12-13 13:04:58.533208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533211] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.972 [2024-12-13 13:04:58.533226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533230] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533233] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.972 [2024-12-13 13:04:58.533244] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533256] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.972 [2024-12-13 13:04:58.533297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc220, cid 0, qid 0 00:20:17.972 [2024-12-13 13:04:58.533304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc380, cid 1, qid 0 00:20:17.972 [2024-12-13 13:04:58.533309] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc4e0, cid 2, qid 0 00:20:17.972 [2024-12-13 13:04:58.533314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.972 [2024-12-13 13:04:58.533319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc7a0, cid 4, qid 0 00:20:17.972 [2024-12-13 13:04:58.533407] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.972 [2024-12-13 13:04:58.533414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.972 [2024-12-13 13:04:58.533418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533422] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc7a0) on tqpair=0x2083540 00:20:17.972 [2024-12-13 13:04:58.533428] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:17.972 [2024-12-13 13:04:58.533433] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533441] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533452] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533459] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533463] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.972 [2024-12-13 13:04:58.533492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc7a0, cid 4, qid 0 00:20:17.972 [2024-12-13 13:04:58.533549] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.972 [2024-12-13 13:04:58.533556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.972 [2024-12-13 13:04:58.533560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc7a0) on tqpair=0x2083540 00:20:17.972 [2024-12-13 13:04:58.533621] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533631] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533639] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533643] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533647] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.972 [2024-12-13 13:04:58.533672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc7a0, cid 4, qid 0 00:20:17.972 [2024-12-13 13:04:58.533739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.972 [2024-12-13 13:04:58.533757] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.972 [2024-12-13 13:04:58.533762] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533766] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2083540): datao=0, datal=4096, cccid=4 00:20:17.972 [2024-12-13 13:04:58.533770] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bc7a0) on tqpair(0x2083540): expected_datao=0, payload_size=4096 00:20:17.972 [2024-12-13 13:04:58.533778] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533782] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.972 [2024-12-13 13:04:58.533796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.972 [2024-12-13 13:04:58.533800] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533804] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc7a0) on tqpair=0x2083540 00:20:17.972 [2024-12-13 13:04:58.533819] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:17.972 [2024-12-13 13:04:58.533829] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533839] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.533846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533866] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533870] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.533878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.972 [2024-12-13 13:04:58.533899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc7a0, cid 4, qid 0 00:20:17.972 [2024-12-13 13:04:58.533973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.972 [2024-12-13 13:04:58.533980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.972 [2024-12-13 13:04:58.533984] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.533988] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2083540): datao=0, datal=4096, cccid=4 00:20:17.972 [2024-12-13 13:04:58.533992] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bc7a0) on tqpair(0x2083540): expected_datao=0, payload_size=4096 00:20:17.972 [2024-12-13 13:04:58.534000] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534004] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.972 [2024-12-13 13:04:58.534018] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.972 [2024-12-13 13:04:58.534022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc7a0) on tqpair=0x2083540 00:20:17.972 [2024-12-13 13:04:58.534041] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.534052] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.534060] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534064] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2083540) 00:20:17.972 [2024-12-13 13:04:58.534075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.972 [2024-12-13 13:04:58.534095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc7a0, cid 4, qid 0 00:20:17.972 [2024-12-13 13:04:58.534158] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.972 [2024-12-13 13:04:58.534165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.972 [2024-12-13 13:04:58.534169] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534172] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2083540): datao=0, datal=4096, cccid=4 00:20:17.972 [2024-12-13 13:04:58.534177] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bc7a0) on tqpair(0x2083540): expected_datao=0, payload_size=4096 00:20:17.972 [2024-12-13 13:04:58.534184] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534188] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.972 [2024-12-13 13:04:58.534217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.972 [2024-12-13 13:04:58.534221] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.972 [2024-12-13 13:04:58.534225] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc7a0) on tqpair=0x2083540 00:20:17.972 [2024-12-13 13:04:58.534234] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.534242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.534252] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:17.972 [2024-12-13 13:04:58.534258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:17.973 [2024-12-13 13:04:58.534263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:17.973 [2024-12-13 13:04:58.534268] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:17.973 [2024-12-13 13:04:58.534273] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:17.973 [2024-12-13 13:04:58.534278] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to ready (no timeout) 00:20:17.973 [2024-12-13 13:04:58.534291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534295] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534299] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534312] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.973 [2024-12-13 13:04:58.534349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc7a0, cid 4, qid 0 00:20:17.973 [2024-12-13 13:04:58.534357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc900, cid 5, qid 0 00:20:17.973 [2024-12-13 13:04:58.534422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.973 [2024-12-13 13:04:58.534429] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.973 [2024-12-13 13:04:58.534433] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534437] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc7a0) on tqpair=0x2083540 00:20:17.973 [2024-12-13 13:04:58.534444] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.973 [2024-12-13 13:04:58.534450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.973 [2024-12-13 13:04:58.534453] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534457] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc900) on tqpair=0x2083540 00:20:17.973 [2024-12-13 13:04:58.534468] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc900, cid 5, qid 0 00:20:17.973 [2024-12-13 13:04:58.534555] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.973 [2024-12-13 13:04:58.534561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.973 [2024-12-13 13:04:58.534565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc900) on tqpair=0x2083540 00:20:17.973 [2024-12-13 13:04:58.534580] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534584] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534587] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc900, cid 5, qid 0 00:20:17.973 [2024-12-13 13:04:58.534666] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.973 [2024-12-13 13:04:58.534673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.973 [2024-12-13 13:04:58.534676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534680] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc900) on tqpair=0x2083540 00:20:17.973 [2024-12-13 13:04:58.534691] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc900, cid 5, qid 0 00:20:17.973 [2024-12-13 13:04:58.534783] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.973 [2024-12-13 13:04:58.534792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.973 [2024-12-13 13:04:58.534796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534800] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc900) on tqpair=0x2083540 00:20:17.973 [2024-12-13 13:04:58.534814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534818] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534822] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534836] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534840] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534864] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.534884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2083540) 00:20:17.973 [2024-12-13 13:04:58.534890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.973 [2024-12-13 13:04:58.534910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc900, cid 5, qid 0 00:20:17.973 [2024-12-13 13:04:58.534917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc7a0, cid 4, qid 0 00:20:17.973 [2024-12-13 13:04:58.534922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bca60, cid 6, qid 0 00:20:17.973 [2024-12-13 13:04:58.534926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bcbc0, cid 7, qid 0 00:20:17.973 [2024-12-13 13:04:58.535061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.973 [2024-12-13 13:04:58.535067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.973 [2024-12-13 13:04:58.535071] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535075] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2083540): datao=0, datal=8192, cccid=5 00:20:17.973 [2024-12-13 13:04:58.535079] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bc900) on tqpair(0x2083540): expected_datao=0, payload_size=8192 00:20:17.973 [2024-12-13 13:04:58.535095] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535100] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.973 [2024-12-13 13:04:58.535121] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.973 [2024-12-13 13:04:58.535142] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535145] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2083540): datao=0, datal=512, cccid=4 00:20:17.973 [2024-12-13 13:04:58.535150] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bc7a0) on tqpair(0x2083540): expected_datao=0, payload_size=512 00:20:17.973 [2024-12-13 13:04:58.535157] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535161] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.973 [2024-12-13 13:04:58.535173] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.973 [2024-12-13 13:04:58.535176] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535180] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2083540): datao=0, datal=512, cccid=6 00:20:17.973 [2024-12-13 13:04:58.535184] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bca60) on tqpair(0x2083540): expected_datao=0, payload_size=512 00:20:17.973 [2024-12-13 13:04:58.535191] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535195] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.973 [2024-12-13 13:04:58.535206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.973 [2024-12-13 13:04:58.535210] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535214] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2083540): datao=0, datal=4096, cccid=7 00:20:17.973 [2024-12-13 13:04:58.535218] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20bcbc0) on tqpair(0x2083540): expected_datao=0, payload_size=4096 00:20:17.973 [2024-12-13 13:04:58.535226] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535229] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535238] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.973 [2024-12-13 13:04:58.535244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.973 [2024-12-13 13:04:58.535248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.973 [2024-12-13 13:04:58.535252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc900) on tqpair=0x2083540 00:20:17.973 [2024-12-13 13:04:58.535268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.973 [2024-12-13 13:04:58.535275] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.973 ===================================================== 00:20:17.973 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.973 ===================================================== 00:20:17.973 Controller Capabilities/Features 00:20:17.973 ================================ 00:20:17.973 Vendor ID: 8086 00:20:17.973 Subsystem Vendor ID: 8086 00:20:17.973 Serial Number: SPDK00000000000001 00:20:17.974 Model Number: SPDK bdev Controller 00:20:17.974 Firmware Version: 24.01.1 00:20:17.974 Recommended Arb Burst: 6 00:20:17.974 IEEE OUI Identifier: e4 d2 5c 00:20:17.974 Multi-path I/O 00:20:17.974 May have multiple subsystem ports: Yes 00:20:17.974 May have multiple controllers: Yes 00:20:17.974 Associated with SR-IOV VF: No 00:20:17.974 Max Data Transfer Size: 131072 00:20:17.974 Max Number of Namespaces: 32 00:20:17.974 Max Number of I/O Queues: 127 00:20:17.974 NVMe Specification Version (VS): 1.3 00:20:17.974 NVMe Specification Version (Identify): 1.3 00:20:17.974 Maximum Queue Entries: 128 00:20:17.974 Contiguous Queues Required: Yes 00:20:17.974 Arbitration Mechanisms Supported 00:20:17.974 Weighted Round Robin: Not Supported 00:20:17.974 Vendor Specific: Not Supported 00:20:17.974 Reset Timeout: 15000 ms 00:20:17.974 Doorbell Stride: 4 bytes 00:20:17.974 NVM Subsystem Reset: Not Supported 00:20:17.974 Command Sets Supported 00:20:17.974 NVM Command Set: Supported 00:20:17.974 Boot Partition: Not Supported 00:20:17.974 Memory Page Size Minimum: 4096 bytes 00:20:17.974 Memory Page Size Maximum: 4096 bytes 00:20:17.974 
Persistent Memory Region: Not Supported 00:20:17.974 Optional Asynchronous Events Supported 00:20:17.974 Namespace Attribute Notices: Supported 00:20:17.974 Firmware Activation Notices: Not Supported 00:20:17.974 ANA Change Notices: Not Supported 00:20:17.974 PLE Aggregate Log Change Notices: Not Supported 00:20:17.974 LBA Status Info Alert Notices: Not Supported 00:20:17.974 EGE Aggregate Log Change Notices: Not Supported 00:20:17.974 Normal NVM Subsystem Shutdown event: Not Supported 00:20:17.974 Zone Descriptor Change Notices: Not Supported 00:20:17.974 Discovery Log Change Notices: Not Supported 00:20:17.974 Controller Attributes 00:20:17.974 128-bit Host Identifier: Supported 00:20:17.974 Non-Operational Permissive Mode: Not Supported 00:20:17.974 NVM Sets: Not Supported 00:20:17.974 Read Recovery Levels: Not Supported 00:20:17.974 Endurance Groups: Not Supported 00:20:17.974 Predictable Latency Mode: Not Supported 00:20:17.974 Traffic Based Keep ALive: Not Supported 00:20:17.974 Namespace Granularity: Not Supported 00:20:17.974 SQ Associations: Not Supported 00:20:17.974 UUID List: Not Supported 00:20:17.974 Multi-Domain Subsystem: Not Supported 00:20:17.974 Fixed Capacity Management: Not Supported 00:20:17.974 Variable Capacity Management: Not Supported 00:20:17.974 Delete Endurance Group: Not Supported 00:20:17.974 Delete NVM Set: Not Supported 00:20:17.974 Extended LBA Formats Supported: Not Supported 00:20:17.974 Flexible Data Placement Supported: Not Supported 00:20:17.974 00:20:17.974 Controller Memory Buffer Support 00:20:17.974 ================================ 00:20:17.974 Supported: No 00:20:17.974 00:20:17.974 Persistent Memory Region Support 00:20:17.974 ================================ 00:20:17.974 Supported: No 00:20:17.974 00:20:17.974 Admin Command Set Attributes 00:20:17.974 ============================ 00:20:17.974 Security Send/Receive: Not Supported 00:20:17.974 Format NVM: Not Supported 00:20:17.974 Firmware Activate/Download: Not Supported 00:20:17.974 Namespace Management: Not Supported 00:20:17.974 Device Self-Test: Not Supported 00:20:17.974 Directives: Not Supported 00:20:17.974 NVMe-MI: Not Supported 00:20:17.974 Virtualization Management: Not Supported 00:20:17.974 Doorbell Buffer Config: Not Supported 00:20:17.974 Get LBA Status Capability: Not Supported 00:20:17.974 Command & Feature Lockdown Capability: Not Supported 00:20:17.974 Abort Command Limit: 4 00:20:17.974 Async Event Request Limit: 4 00:20:17.974 Number of Firmware Slots: N/A 00:20:17.974 Firmware Slot 1 Read-Only: N/A 00:20:17.974 Firmware Activation Without Reset: [2024-12-13 13:04:58.535279] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.974 [2024-12-13 13:04:58.535283] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc7a0) on tqpair=0x2083540 00:20:17.974 [2024-12-13 13:04:58.535293] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.974 [2024-12-13 13:04:58.535300] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.974 [2024-12-13 13:04:58.535304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.974 [2024-12-13 13:04:58.535307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bca60) on tqpair=0x2083540 00:20:17.974 [2024-12-13 13:04:58.535315] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.974 [2024-12-13 13:04:58.535322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.974 [2024-12-13 
13:04:58.535325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.974 [2024-12-13 13:04:58.535329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bcbc0) on tqpair=0x2083540 00:20:17.974 N/A 00:20:17.974 Multiple Update Detection Support: N/A 00:20:17.974 Firmware Update Granularity: No Information Provided 00:20:17.974 Per-Namespace SMART Log: No 00:20:17.974 Asymmetric Namespace Access Log Page: Not Supported 00:20:17.974 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:17.974 Command Effects Log Page: Supported 00:20:17.974 Get Log Page Extended Data: Supported 00:20:17.974 Telemetry Log Pages: Not Supported 00:20:17.974 Persistent Event Log Pages: Not Supported 00:20:17.974 Supported Log Pages Log Page: May Support 00:20:17.974 Commands Supported & Effects Log Page: Not Supported 00:20:17.974 Feature Identifiers & Effects Log Page:May Support 00:20:17.974 NVMe-MI Commands & Effects Log Page: May Support 00:20:17.974 Data Area 4 for Telemetry Log: Not Supported 00:20:17.974 Error Log Page Entries Supported: 128 00:20:17.974 Keep Alive: Supported 00:20:17.974 Keep Alive Granularity: 10000 ms 00:20:17.974 00:20:17.974 NVM Command Set Attributes 00:20:17.974 ========================== 00:20:17.974 Submission Queue Entry Size 00:20:17.974 Max: 64 00:20:17.974 Min: 64 00:20:17.974 Completion Queue Entry Size 00:20:17.974 Max: 16 00:20:17.974 Min: 16 00:20:17.974 Number of Namespaces: 32 00:20:17.974 Compare Command: Supported 00:20:17.974 Write Uncorrectable Command: Not Supported 00:20:17.974 Dataset Management Command: Supported 00:20:17.974 Write Zeroes Command: Supported 00:20:17.974 Set Features Save Field: Not Supported 00:20:17.974 Reservations: Supported 00:20:17.974 Timestamp: Not Supported 00:20:17.974 Copy: Supported 00:20:17.974 Volatile Write Cache: Present 00:20:17.974 Atomic Write Unit (Normal): 1 00:20:17.974 Atomic Write Unit (PFail): 1 00:20:17.974 Atomic Compare & Write Unit: 1 00:20:17.974 Fused Compare & Write: Supported 00:20:17.974 Scatter-Gather List 00:20:17.974 SGL Command Set: Supported 00:20:17.974 SGL Keyed: Supported 00:20:17.974 SGL Bit Bucket Descriptor: Not Supported 00:20:17.974 SGL Metadata Pointer: Not Supported 00:20:17.974 Oversized SGL: Not Supported 00:20:17.974 SGL Metadata Address: Not Supported 00:20:17.974 SGL Offset: Supported 00:20:17.974 Transport SGL Data Block: Not Supported 00:20:17.974 Replay Protected Memory Block: Not Supported 00:20:17.974 00:20:17.974 Firmware Slot Information 00:20:17.974 ========================= 00:20:17.974 Active slot: 1 00:20:17.974 Slot 1 Firmware Revision: 24.01.1 00:20:17.974 00:20:17.974 00:20:17.974 Commands Supported and Effects 00:20:17.974 ============================== 00:20:17.974 Admin Commands 00:20:17.974 -------------- 00:20:17.974 Get Log Page (02h): Supported 00:20:17.974 Identify (06h): Supported 00:20:17.974 Abort (08h): Supported 00:20:17.974 Set Features (09h): Supported 00:20:17.974 Get Features (0Ah): Supported 00:20:17.974 Asynchronous Event Request (0Ch): Supported 00:20:17.974 Keep Alive (18h): Supported 00:20:17.974 I/O Commands 00:20:17.974 ------------ 00:20:17.974 Flush (00h): Supported LBA-Change 00:20:17.974 Write (01h): Supported LBA-Change 00:20:17.974 Read (02h): Supported 00:20:17.974 Compare (05h): Supported 00:20:17.974 Write Zeroes (08h): Supported LBA-Change 00:20:17.974 Dataset Management (09h): Supported LBA-Change 00:20:17.974 Copy (19h): Supported LBA-Change 00:20:17.974 Unknown (79h): Supported LBA-Change 00:20:17.974 
Unknown (7Ah): Supported 00:20:17.974 00:20:17.974 Error Log 00:20:17.974 ========= 00:20:17.974 00:20:17.974 Arbitration 00:20:17.974 =========== 00:20:17.974 Arbitration Burst: 1 00:20:17.974 00:20:17.974 Power Management 00:20:17.974 ================ 00:20:17.974 Number of Power States: 1 00:20:17.974 Current Power State: Power State #0 00:20:17.974 Power State #0: 00:20:17.974 Max Power: 0.00 W 00:20:17.974 Non-Operational State: Operational 00:20:17.974 Entry Latency: Not Reported 00:20:17.974 Exit Latency: Not Reported 00:20:17.974 Relative Read Throughput: 0 00:20:17.974 Relative Read Latency: 0 00:20:17.974 Relative Write Throughput: 0 00:20:17.974 Relative Write Latency: 0 00:20:17.974 Idle Power: Not Reported 00:20:17.974 Active Power: Not Reported 00:20:17.974 Non-Operational Permissive Mode: Not Supported 00:20:17.974 00:20:17.974 Health Information 00:20:17.974 ================== 00:20:17.974 Critical Warnings: 00:20:17.974 Available Spare Space: OK 00:20:17.974 Temperature: OK 00:20:17.974 Device Reliability: OK 00:20:17.974 Read Only: No 00:20:17.974 Volatile Memory Backup: OK 00:20:17.974 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:17.975 Temperature Threshold: [2024-12-13 13:04:58.535437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.535455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.535479] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bcbc0, cid 7, qid 0 00:20:17.975 [2024-12-13 13:04:58.535546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.535553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.535557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535561] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bcbc0) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.535610] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:17.975 [2024-12-13 13:04:58.535622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.975 [2024-12-13 13:04:58.535628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.975 [2024-12-13 13:04:58.535634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.975 [2024-12-13 13:04:58.535640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.975 [2024-12-13 13:04:58.535649] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.535664] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.535685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.535738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.535745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.535749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.535761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535765] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.535768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.539833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.539876] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.539958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.539965] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.539969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.539973] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.539979] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:17.975 [2024-12-13 13:04:58.539984] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:17.975 [2024-12-13 13:04:58.539994] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.539998] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540002] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.540010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.540028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.540086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.540093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.540096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.540111] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540120] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.540126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.540143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.540206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.540221] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.540225] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.540241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540249] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.540256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.540275] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.540328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.540335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.540338] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.540353] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540357] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540361] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.540368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.540385] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.540438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.540444] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.540448] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540452] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.540463] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.540478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.540494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.540544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.540554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.540559] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.540574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.975 [2024-12-13 13:04:58.540589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.975 [2024-12-13 13:04:58.540606] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.975 [2024-12-13 13:04:58.540657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.975 [2024-12-13 13:04:58.540668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.975 [2024-12-13 13:04:58.540672] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.975 [2024-12-13 13:04:58.540687] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540691] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.975 [2024-12-13 13:04:58.540695] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.540702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.540719] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.540783] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.540791] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.540796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.540800] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.540811] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.540815] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.540819] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.540826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.540845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 
0 00:20:17.976 [2024-12-13 13:04:58.540898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.540909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.540913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.540917] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.540928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.540932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.540936] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.540943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.540960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541122] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541129] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541219] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541226] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541229] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541233] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541244] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541248] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541252] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541275] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541329] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541337] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541428] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541439] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541485] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541553] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541564] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541572] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541597] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541668] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541687] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541798] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541802] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541806] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541822] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541852] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.541909] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.541920] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.541924] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541928] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.541940] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:17.976 [2024-12-13 13:04:58.541944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.541948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.541955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.541973] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.542022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.976 [2024-12-13 13:04:58.542029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.976 [2024-12-13 13:04:58.542033] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.542036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.976 [2024-12-13 13:04:58.542047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.542052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.976 [2024-12-13 13:04:58.542056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.976 [2024-12-13 13:04:58.542063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.976 [2024-12-13 13:04:58.542080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.976 [2024-12-13 13:04:58.542145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542155] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542159] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.542256] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542276] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542280] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542292] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542300] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.542375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542390] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542394] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542405] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542409] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542413] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542437] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.542485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542499] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542510] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542514] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.542597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542612] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.542712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542723] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542727] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542751] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542756] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542782] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.542834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542844] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542848] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542859] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542867] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.542943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.542950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.542953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.542968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.542975] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.542982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.542999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 
0 00:20:17.977 [2024-12-13 13:04:58.543051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.543058] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.543061] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.543075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543083] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.543090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.543107] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.543186] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.543193] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.543197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543201] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.543215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.543231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.543250] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.543299] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.543306] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.543309] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543313] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.543324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543329] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543333] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.543340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.543357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.543412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.543419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.543423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.977 [2024-12-13 13:04:58.543438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543442] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.977 [2024-12-13 13:04:58.543446] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.977 [2024-12-13 13:04:58.543453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.977 [2024-12-13 13:04:58.543470] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.977 [2024-12-13 13:04:58.543522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.977 [2024-12-13 13:04:58.543528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.977 [2024-12-13 13:04:58.543532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.543547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.543562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.543579] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.543643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.543649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.543653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.543667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543672] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543675] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.543682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.543699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.543749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.543755] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.543759] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543763] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.543784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543794] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.543801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.543820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.543872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.543882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.543887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.543902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543910] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.543917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.543935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.543983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.543990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.543994] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.543997] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544016] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544039] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.544097] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.544105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.544109] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544113] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:17.978 [2024-12-13 13:04:58.544128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.544204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.544215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.544219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544234] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.544320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.544326] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.544330] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544349] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544376] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.544431] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.544442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.544446] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544450] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544469] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544493] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.544544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.544551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.544555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544577] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.544651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.544661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.544666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544681] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544685] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544713] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.978 [2024-12-13 13:04:58.544791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.978 [2024-12-13 13:04:58.544799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.978 [2024-12-13 13:04:58.544804] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.978 [2024-12-13 13:04:58.544819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.978 [2024-12-13 13:04:58.544827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.978 [2024-12-13 13:04:58.544835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.978 [2024-12-13 13:04:58.544854] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.544908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.544915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.544919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.544923] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.544934] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.544939] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.544942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.544949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.544967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545039] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545049] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545054] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545058] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545150] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545180] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545184] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 
0 00:20:17.979 [2024-12-13 13:04:58.545260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545295] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545298] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545334] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545394] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545398] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545441] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545613] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545633] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545652] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545756] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545783] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545788] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545792] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.545888] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.545893] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545897] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.545909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.545918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.545925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.545944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.545998] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.546009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.546013] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.546018] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.546029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.546034] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.546038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.979 [2024-12-13 13:04:58.546045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.979 [2024-12-13 13:04:58.546064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.979 [2024-12-13 13:04:58.546131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.979 [2024-12-13 13:04:58.546138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.979 [2024-12-13 13:04:58.546142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.979 [2024-12-13 13:04:58.546145] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.979 [2024-12-13 13:04:58.546156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.546189] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.546240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.546247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.546251] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546255] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.546266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.546298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.546347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.546353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.546357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.546387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:17.980 [2024-12-13 13:04:58.546392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.546419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.546469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.546475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.546479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.546493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546502] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.546525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.546575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.546585] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.546589] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546593] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.546604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546612] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.546636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.546684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.546695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.546699] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546703] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.546714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546719] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546722] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.546761] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.546841] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.546848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.546852] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546856] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.546867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.546901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.546953] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.546964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.546969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546973] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.546984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.546992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.546999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.547017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.547069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.547076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.547080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547083] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.547094] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.547159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.547178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.547233] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.547240] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.547244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547248] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.547259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.547274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.547292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.547348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.547355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.547359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.547374] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547382] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.547390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.547407] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.980 [2024-12-13 13:04:58.547464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.547471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.547475] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547479] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.980 [2024-12-13 13:04:58.547489] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547494] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547498] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.980 [2024-12-13 13:04:58.547505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.980 [2024-12-13 13:04:58.547522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 
0 00:20:17.980 [2024-12-13 13:04:58.547572] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.980 [2024-12-13 13:04:58.547582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.980 [2024-12-13 13:04:58.547587] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.980 [2024-12-13 13:04:58.547591] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.981 [2024-12-13 13:04:58.547617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.981 [2024-12-13 13:04:58.547621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.981 [2024-12-13 13:04:58.547624] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.981 [2024-12-13 13:04:58.547631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.981 [2024-12-13 13:04:58.547649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.981 [2024-12-13 13:04:58.547698] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.981 [2024-12-13 13:04:58.547709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.981 [2024-12-13 13:04:58.547713] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.981 [2024-12-13 13:04:58.547717] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.981 [2024-12-13 13:04:58.547727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.981 [2024-12-13 13:04:58.547732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.981 [2024-12-13 13:04:58.547736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2083540) 00:20:17.981 [2024-12-13 13:04:58.554843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.981 [2024-12-13 13:04:58.554896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20bc640, cid 3, qid 0 00:20:17.981 [2024-12-13 13:04:58.554950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.981 [2024-12-13 13:04:58.554958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.981 [2024-12-13 13:04:58.554962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.981 [2024-12-13 13:04:58.554966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20bc640) on tqpair=0x2083540 00:20:17.981 [2024-12-13 13:04:58.554975] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 14 milliseconds 00:20:17.981 0 Kelvin (-273 Celsius) 00:20:17.981 Available Spare: 0% 00:20:17.981 Available Spare Threshold: 0% 00:20:17.981 Life Percentage Used: 0% 00:20:17.981 Data Units Read: 0 00:20:17.981 Data Units Written: 0 00:20:17.981 Host Read Commands: 0 00:20:17.981 Host Write Commands: 0 00:20:17.981 Controller Busy Time: 0 minutes 00:20:17.981 Power Cycles: 0 00:20:17.981 Power On Hours: 0 hours 00:20:17.981 Unsafe Shutdowns: 0 00:20:17.981 Unrecoverable Media Errors: 0 00:20:17.981 Lifetime Error Log Entries: 0 00:20:17.981 Warning Temperature Time: 0 minutes 00:20:17.981 Critical Temperature Time: 0 minutes 00:20:17.981 00:20:17.981 Number of Queues 00:20:17.981 
================ 00:20:17.981 Number of I/O Submission Queues: 127 00:20:17.981 Number of I/O Completion Queues: 127 00:20:17.981 00:20:17.981 Active Namespaces 00:20:17.981 ================= 00:20:17.981 Namespace ID:1 00:20:17.981 Error Recovery Timeout: Unlimited 00:20:17.981 Command Set Identifier: NVM (00h) 00:20:17.981 Deallocate: Supported 00:20:17.981 Deallocated/Unwritten Error: Not Supported 00:20:17.981 Deallocated Read Value: Unknown 00:20:17.981 Deallocate in Write Zeroes: Not Supported 00:20:17.981 Deallocated Guard Field: 0xFFFF 00:20:17.981 Flush: Supported 00:20:17.981 Reservation: Supported 00:20:17.981 Namespace Sharing Capabilities: Multiple Controllers 00:20:17.981 Size (in LBAs): 131072 (0GiB) 00:20:17.981 Capacity (in LBAs): 131072 (0GiB) 00:20:17.981 Utilization (in LBAs): 131072 (0GiB) 00:20:17.981 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:17.981 EUI64: ABCDEF0123456789 00:20:17.981 UUID: 1ddce64d-362e-4265-b846-c7d92ca0f104 00:20:17.981 Thin Provisioning: Not Supported 00:20:17.981 Per-NS Atomic Units: Yes 00:20:17.981 Atomic Boundary Size (Normal): 0 00:20:17.981 Atomic Boundary Size (PFail): 0 00:20:17.981 Atomic Boundary Offset: 0 00:20:17.981 Maximum Single Source Range Length: 65535 00:20:17.981 Maximum Copy Length: 65535 00:20:17.981 Maximum Source Range Count: 1 00:20:17.981 NGUID/EUI64 Never Reused: No 00:20:17.981 Namespace Write Protected: No 00:20:17.981 Number of LBA Formats: 1 00:20:17.981 Current LBA Format: LBA Format #00 00:20:17.981 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:17.981 00:20:17.981 13:04:58 -- host/identify.sh@51 -- # sync 00:20:17.981 13:04:58 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.981 13:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.981 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.981 13:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.981 13:04:58 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:17.981 13:04:58 -- host/identify.sh@56 -- # nvmftestfini 00:20:17.981 13:04:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:17.981 13:04:58 -- nvmf/common.sh@116 -- # sync 00:20:17.981 13:04:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:17.981 13:04:58 -- nvmf/common.sh@119 -- # set +e 00:20:17.981 13:04:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:17.981 13:04:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:17.981 rmmod nvme_tcp 00:20:17.981 rmmod nvme_fabrics 00:20:17.981 rmmod nvme_keyring 00:20:17.981 13:04:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:17.981 13:04:58 -- nvmf/common.sh@123 -- # set -e 00:20:17.981 13:04:58 -- nvmf/common.sh@124 -- # return 0 00:20:17.981 13:04:58 -- nvmf/common.sh@477 -- # '[' -n 93162 ']' 00:20:17.981 13:04:58 -- nvmf/common.sh@478 -- # killprocess 93162 00:20:17.981 13:04:58 -- common/autotest_common.sh@936 -- # '[' -z 93162 ']' 00:20:17.981 13:04:58 -- common/autotest_common.sh@940 -- # kill -0 93162 00:20:17.981 13:04:58 -- common/autotest_common.sh@941 -- # uname 00:20:17.981 13:04:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.981 13:04:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93162 00:20:17.981 13:04:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:17.981 13:04:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:17.981 killing process with pid 93162 00:20:17.981 13:04:58 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 93162' 00:20:17.981 13:04:58 -- common/autotest_common.sh@955 -- # kill 93162 00:20:17.981 [2024-12-13 13:04:58.737149] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:17.981 13:04:58 -- common/autotest_common.sh@960 -- # wait 93162 00:20:18.241 13:04:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:18.241 13:04:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:18.241 13:04:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:18.241 13:04:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.241 13:04:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:18.241 13:04:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.241 13:04:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.241 13:04:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.241 13:04:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:18.241 00:20:18.241 real 0m2.696s 00:20:18.241 user 0m7.555s 00:20:18.241 sys 0m0.704s 00:20:18.241 13:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:18.241 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:18.241 ************************************ 00:20:18.241 END TEST nvmf_identify 00:20:18.241 ************************************ 00:20:18.500 13:04:59 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:18.500 13:04:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:18.500 13:04:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.500 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:18.500 ************************************ 00:20:18.500 START TEST nvmf_perf 00:20:18.500 ************************************ 00:20:18.500 13:04:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:18.500 * Looking for test storage... 00:20:18.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.500 13:04:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:18.500 13:04:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:18.500 13:04:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:18.500 13:04:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:18.500 13:04:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:18.500 13:04:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:18.500 13:04:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:18.500 13:04:59 -- scripts/common.sh@335 -- # IFS=.-: 00:20:18.500 13:04:59 -- scripts/common.sh@335 -- # read -ra ver1 00:20:18.500 13:04:59 -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.500 13:04:59 -- scripts/common.sh@336 -- # read -ra ver2 00:20:18.500 13:04:59 -- scripts/common.sh@337 -- # local 'op=<' 00:20:18.500 13:04:59 -- scripts/common.sh@339 -- # ver1_l=2 00:20:18.500 13:04:59 -- scripts/common.sh@340 -- # ver2_l=1 00:20:18.500 13:04:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:18.500 13:04:59 -- scripts/common.sh@343 -- # case "$op" in 00:20:18.500 13:04:59 -- scripts/common.sh@344 -- # : 1 00:20:18.500 13:04:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:18.500 13:04:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.500 13:04:59 -- scripts/common.sh@364 -- # decimal 1 00:20:18.500 13:04:59 -- scripts/common.sh@352 -- # local d=1 00:20:18.500 13:04:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.500 13:04:59 -- scripts/common.sh@354 -- # echo 1 00:20:18.500 13:04:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:18.500 13:04:59 -- scripts/common.sh@365 -- # decimal 2 00:20:18.500 13:04:59 -- scripts/common.sh@352 -- # local d=2 00:20:18.500 13:04:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.500 13:04:59 -- scripts/common.sh@354 -- # echo 2 00:20:18.500 13:04:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:18.500 13:04:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:18.500 13:04:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:18.500 13:04:59 -- scripts/common.sh@367 -- # return 0 00:20:18.500 13:04:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.500 13:04:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.500 --rc genhtml_branch_coverage=1 00:20:18.500 --rc genhtml_function_coverage=1 00:20:18.500 --rc genhtml_legend=1 00:20:18.500 --rc geninfo_all_blocks=1 00:20:18.500 --rc geninfo_unexecuted_blocks=1 00:20:18.500 00:20:18.500 ' 00:20:18.500 13:04:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.500 --rc genhtml_branch_coverage=1 00:20:18.500 --rc genhtml_function_coverage=1 00:20:18.500 --rc genhtml_legend=1 00:20:18.500 --rc geninfo_all_blocks=1 00:20:18.500 --rc geninfo_unexecuted_blocks=1 00:20:18.500 00:20:18.500 ' 00:20:18.500 13:04:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.500 --rc genhtml_branch_coverage=1 00:20:18.500 --rc genhtml_function_coverage=1 00:20:18.500 --rc genhtml_legend=1 00:20:18.500 --rc geninfo_all_blocks=1 00:20:18.500 --rc geninfo_unexecuted_blocks=1 00:20:18.500 00:20:18.500 ' 00:20:18.500 13:04:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.500 --rc genhtml_branch_coverage=1 00:20:18.500 --rc genhtml_function_coverage=1 00:20:18.500 --rc genhtml_legend=1 00:20:18.500 --rc geninfo_all_blocks=1 00:20:18.500 --rc geninfo_unexecuted_blocks=1 00:20:18.500 00:20:18.500 ' 00:20:18.500 13:04:59 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.500 13:04:59 -- nvmf/common.sh@7 -- # uname -s 00:20:18.500 13:04:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.500 13:04:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.500 13:04:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.500 13:04:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.500 13:04:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.500 13:04:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.500 13:04:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.500 13:04:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.500 13:04:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.500 13:04:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.500 13:04:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:18.500 
13:04:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:20:18.500 13:04:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.500 13:04:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.500 13:04:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.500 13:04:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.500 13:04:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.500 13:04:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.500 13:04:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.500 13:04:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.500 13:04:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.500 13:04:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.500 13:04:59 -- paths/export.sh@5 -- # export PATH 00:20:18.500 13:04:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.500 13:04:59 -- nvmf/common.sh@46 -- # : 0 00:20:18.500 13:04:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:18.500 13:04:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:18.500 13:04:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:18.500 13:04:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.500 13:04:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.500 13:04:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:18.500 13:04:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:18.500 13:04:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:18.500 13:04:59 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:18.500 13:04:59 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:18.500 13:04:59 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.500 13:04:59 -- host/perf.sh@17 -- # nvmftestinit 00:20:18.500 13:04:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:18.500 13:04:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.500 13:04:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:18.500 13:04:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:18.500 13:04:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:18.500 13:04:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.500 13:04:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.500 13:04:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.500 13:04:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:18.500 13:04:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:18.500 13:04:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:18.500 13:04:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:18.500 13:04:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:18.500 13:04:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:18.500 13:04:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.500 13:04:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.500 13:04:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.500 13:04:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:18.500 13:04:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.500 13:04:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.500 13:04:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.500 13:04:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.500 13:04:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.500 13:04:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.500 13:04:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.500 13:04:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.500 13:04:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:18.500 13:04:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:18.500 Cannot find device "nvmf_tgt_br" 00:20:18.500 13:04:59 -- nvmf/common.sh@154 -- # true 00:20:18.500 13:04:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.759 Cannot find device "nvmf_tgt_br2" 00:20:18.759 13:04:59 -- nvmf/common.sh@155 -- # true 00:20:18.759 13:04:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:18.759 13:04:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:18.759 Cannot find device "nvmf_tgt_br" 00:20:18.759 13:04:59 -- nvmf/common.sh@157 -- # true 00:20:18.759 13:04:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:18.759 Cannot find device "nvmf_tgt_br2" 00:20:18.759 13:04:59 -- nvmf/common.sh@158 -- # true 00:20:18.759 13:04:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:18.759 13:04:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:18.759 13:04:59 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.759 13:04:59 -- nvmf/common.sh@161 -- # true 00:20:18.759 13:04:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.759 13:04:59 -- nvmf/common.sh@162 -- # true 00:20:18.759 13:04:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.759 13:04:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.759 13:04:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.759 13:04:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.759 13:04:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.759 13:04:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.759 13:04:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.759 13:04:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.759 13:04:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.759 13:04:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:18.759 13:04:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:18.759 13:04:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:18.759 13:04:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:18.759 13:04:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.759 13:04:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.759 13:04:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.759 13:04:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:18.759 13:04:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:18.759 13:04:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.759 13:04:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.018 13:04:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:19.018 13:04:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.018 13:04:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.018 13:04:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:19.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:19.018 00:20:19.018 --- 10.0.0.2 ping statistics --- 00:20:19.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.018 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:19.018 13:04:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:19.018 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:19.018 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:20:19.018 00:20:19.018 --- 10.0.0.3 ping statistics --- 00:20:19.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.018 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:19.018 13:04:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:19.018 00:20:19.018 --- 10.0.0.1 ping statistics --- 00:20:19.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.018 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:19.018 13:04:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.018 13:04:59 -- nvmf/common.sh@421 -- # return 0 00:20:19.018 13:04:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:19.018 13:04:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.018 13:04:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:19.018 13:04:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:19.018 13:04:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.018 13:04:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:19.018 13:04:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:19.018 13:04:59 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:19.018 13:04:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:19.018 13:04:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:19.018 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:19.018 13:04:59 -- nvmf/common.sh@469 -- # nvmfpid=93396 00:20:19.018 13:04:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:19.018 13:04:59 -- nvmf/common.sh@470 -- # waitforlisten 93396 00:20:19.018 13:04:59 -- common/autotest_common.sh@829 -- # '[' -z 93396 ']' 00:20:19.018 13:04:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.018 13:04:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.018 13:04:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.018 13:04:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.018 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:19.018 [2024-12-13 13:04:59.652370] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:19.018 [2024-12-13 13:04:59.652469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.018 [2024-12-13 13:04:59.791699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.277 [2024-12-13 13:04:59.854067] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.277 [2024-12-13 13:04:59.854228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.277 [2024-12-13 13:04:59.854240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:19.277 [2024-12-13 13:04:59.854248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.277 [2024-12-13 13:04:59.854867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.277 [2024-12-13 13:04:59.854943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.277 [2024-12-13 13:04:59.855023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.277 [2024-12-13 13:04:59.855026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.213 13:05:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.213 13:05:00 -- common/autotest_common.sh@862 -- # return 0 00:20:20.213 13:05:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:20.213 13:05:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:20.213 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:20:20.213 13:05:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.213 13:05:00 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:20.213 13:05:00 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:20.472 13:05:01 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:20.472 13:05:01 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:20.731 13:05:01 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:20.731 13:05:01 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:20.990 13:05:01 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:20.990 13:05:01 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:20.990 13:05:01 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:20.990 13:05:01 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:20.990 13:05:01 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.248 [2024-12-13 13:05:01.967854] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.248 13:05:01 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:21.508 13:05:02 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:21.508 13:05:02 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:21.766 13:05:02 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:21.766 13:05:02 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:22.026 13:05:02 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.285 [2024-12-13 13:05:02.982667] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.285 13:05:03 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:22.545 13:05:03 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:22.545 13:05:03 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:22.545 13:05:03 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:22.545 13:05:03 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:23.954 Initializing NVMe Controllers 00:20:23.954 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:23.955 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:23.955 Initialization complete. Launching workers. 00:20:23.955 ======================================================== 00:20:23.955 Latency(us) 00:20:23.955 Device Information : IOPS MiB/s Average min max 00:20:23.955 PCIE (0000:00:06.0) NSID 1 from core 0: 21536.00 84.12 1486.16 395.12 8308.96 00:20:23.955 ======================================================== 00:20:23.955 Total : 21536.00 84.12 1486.16 395.12 8308.96 00:20:23.955 00:20:23.955 13:05:04 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:25.331 Initializing NVMe Controllers 00:20:25.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:25.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:25.331 Initialization complete. Launching workers. 00:20:25.331 ======================================================== 00:20:25.331 Latency(us) 00:20:25.331 Device Information : IOPS MiB/s Average min max 00:20:25.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3329.15 13.00 300.05 103.69 7285.46 00:20:25.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 121.90 0.48 8203.53 5359.80 14083.96 00:20:25.331 ======================================================== 00:20:25.331 Total : 3451.04 13.48 579.21 103.69 14083.96 00:20:25.331 00:20:25.331 13:05:05 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:26.707 Initializing NVMe Controllers 00:20:26.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:26.707 Initialization complete. Launching workers. 00:20:26.707 ======================================================== 00:20:26.707 Latency(us) 00:20:26.707 Device Information : IOPS MiB/s Average min max 00:20:26.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9530.86 37.23 3358.81 581.75 9157.56 00:20:26.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2656.68 10.38 12147.56 5992.34 20315.31 00:20:26.707 ======================================================== 00:20:26.707 Total : 12187.54 47.61 5274.62 581.75 20315.31 00:20:26.707 00:20:26.707 13:05:07 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:26.707 13:05:07 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:29.241 Initializing NVMe Controllers 00:20:29.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:29.241 Controller IO queue size 128, less than required. 
00:20:29.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:29.241 Controller IO queue size 128, less than required. 00:20:29.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:29.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:29.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:29.241 Initialization complete. Launching workers. 00:20:29.241 ======================================================== 00:20:29.241 Latency(us) 00:20:29.241 Device Information : IOPS MiB/s Average min max 00:20:29.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1840.99 460.25 70660.38 42801.35 134677.64 00:20:29.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 548.65 137.16 243606.71 102802.21 368690.49 00:20:29.241 ======================================================== 00:20:29.241 Total : 2389.64 597.41 110368.37 42801.35 368690.49 00:20:29.241 00:20:29.241 13:05:09 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:29.241 No valid NVMe controllers or AIO or URING devices found 00:20:29.241 Initializing NVMe Controllers 00:20:29.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:29.241 Controller IO queue size 128, less than required. 00:20:29.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:29.241 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:29.241 Controller IO queue size 128, less than required. 00:20:29.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:29.241 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:29.241 WARNING: Some requested NVMe devices were skipped 00:20:29.241 13:05:09 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:31.775 Initializing NVMe Controllers 00:20:31.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.775 Controller IO queue size 128, less than required. 00:20:31.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.775 Controller IO queue size 128, less than required. 00:20:31.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.775 Initialization complete. Launching workers. 
00:20:31.775 00:20:31.775 ==================== 00:20:31.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:31.775 TCP transport: 00:20:31.775 polls: 8420 00:20:31.775 idle_polls: 5157 00:20:31.775 sock_completions: 3263 00:20:31.775 nvme_completions: 4006 00:20:31.775 submitted_requests: 6162 00:20:31.775 queued_requests: 1 00:20:31.775 00:20:31.775 ==================== 00:20:31.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:31.775 TCP transport: 00:20:31.775 polls: 7833 00:20:31.775 idle_polls: 4885 00:20:31.775 sock_completions: 2948 00:20:31.775 nvme_completions: 5662 00:20:31.775 submitted_requests: 8548 00:20:31.775 queued_requests: 1 00:20:31.775 ======================================================== 00:20:31.775 Latency(us) 00:20:31.775 Device Information : IOPS MiB/s Average min max 00:20:31.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1064.37 266.09 123837.12 89264.51 191668.98 00:20:31.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1477.63 369.41 87707.82 48677.21 147985.28 00:20:31.775 ======================================================== 00:20:31.775 Total : 2542.00 635.50 102835.68 48677.21 191668.98 00:20:31.775 00:20:31.775 13:05:12 -- host/perf.sh@66 -- # sync 00:20:31.775 13:05:12 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.034 13:05:12 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:32.034 13:05:12 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:32.034 13:05:12 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:32.293 13:05:12 -- host/perf.sh@72 -- # ls_guid=a7d60ca8-e0da-422e-a02f-473c518d4f5d 00:20:32.293 13:05:12 -- host/perf.sh@73 -- # get_lvs_free_mb a7d60ca8-e0da-422e-a02f-473c518d4f5d 00:20:32.293 13:05:12 -- common/autotest_common.sh@1353 -- # local lvs_uuid=a7d60ca8-e0da-422e-a02f-473c518d4f5d 00:20:32.293 13:05:12 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:32.293 13:05:12 -- common/autotest_common.sh@1355 -- # local fc 00:20:32.293 13:05:12 -- common/autotest_common.sh@1356 -- # local cs 00:20:32.293 13:05:12 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:32.551 13:05:13 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:32.551 { 00:20:32.551 "base_bdev": "Nvme0n1", 00:20:32.551 "block_size": 4096, 00:20:32.551 "cluster_size": 4194304, 00:20:32.551 "free_clusters": 1278, 00:20:32.551 "name": "lvs_0", 00:20:32.551 "total_data_clusters": 1278, 00:20:32.551 "uuid": "a7d60ca8-e0da-422e-a02f-473c518d4f5d" 00:20:32.551 } 00:20:32.551 ]' 00:20:32.551 13:05:13 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="a7d60ca8-e0da-422e-a02f-473c518d4f5d") .free_clusters' 00:20:32.551 13:05:13 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:32.551 13:05:13 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="a7d60ca8-e0da-422e-a02f-473c518d4f5d") .cluster_size' 00:20:32.551 5112 00:20:32.551 13:05:13 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:32.551 13:05:13 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:32.551 13:05:13 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:32.551 13:05:13 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:32.551 13:05:13 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u a7d60ca8-e0da-422e-a02f-473c518d4f5d lbd_0 5112 00:20:33.118 13:05:13 -- host/perf.sh@80 -- # lb_guid=5b69de7d-e36e-40aa-8686-1c06803743fa 00:20:33.118 13:05:13 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 5b69de7d-e36e-40aa-8686-1c06803743fa lvs_n_0 00:20:33.376 13:05:14 -- host/perf.sh@83 -- # ls_nested_guid=c9e65074-1843-42f9-98eb-c155da8ac540 00:20:33.376 13:05:14 -- host/perf.sh@84 -- # get_lvs_free_mb c9e65074-1843-42f9-98eb-c155da8ac540 00:20:33.376 13:05:14 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c9e65074-1843-42f9-98eb-c155da8ac540 00:20:33.376 13:05:14 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:33.376 13:05:14 -- common/autotest_common.sh@1355 -- # local fc 00:20:33.376 13:05:14 -- common/autotest_common.sh@1356 -- # local cs 00:20:33.376 13:05:14 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:33.635 13:05:14 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:33.635 { 00:20:33.635 "base_bdev": "Nvme0n1", 00:20:33.635 "block_size": 4096, 00:20:33.635 "cluster_size": 4194304, 00:20:33.635 "free_clusters": 0, 00:20:33.635 "name": "lvs_0", 00:20:33.635 "total_data_clusters": 1278, 00:20:33.635 "uuid": "a7d60ca8-e0da-422e-a02f-473c518d4f5d" 00:20:33.635 }, 00:20:33.635 { 00:20:33.635 "base_bdev": "5b69de7d-e36e-40aa-8686-1c06803743fa", 00:20:33.635 "block_size": 4096, 00:20:33.635 "cluster_size": 4194304, 00:20:33.635 "free_clusters": 1276, 00:20:33.635 "name": "lvs_n_0", 00:20:33.635 "total_data_clusters": 1276, 00:20:33.635 "uuid": "c9e65074-1843-42f9-98eb-c155da8ac540" 00:20:33.635 } 00:20:33.635 ]' 00:20:33.635 13:05:14 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c9e65074-1843-42f9-98eb-c155da8ac540") .free_clusters' 00:20:33.635 13:05:14 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:33.635 13:05:14 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c9e65074-1843-42f9-98eb-c155da8ac540") .cluster_size' 00:20:33.893 5104 00:20:33.893 13:05:14 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:33.893 13:05:14 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:33.893 13:05:14 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:33.893 13:05:14 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:33.894 13:05:14 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c9e65074-1843-42f9-98eb-c155da8ac540 lbd_nest_0 5104 00:20:34.152 13:05:14 -- host/perf.sh@88 -- # lb_nested_guid=57f9a99c-1b69-4fae-bf06-9b2ecbbea6dd 00:20:34.152 13:05:14 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:34.411 13:05:15 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:34.411 13:05:15 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 57f9a99c-1b69-4fae-bf06-9b2ecbbea6dd 00:20:34.668 13:05:15 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:34.926 13:05:15 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:34.926 13:05:15 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:34.926 13:05:15 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:34.926 13:05:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:34.926 13:05:15 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.184 No valid NVMe controllers or AIO or URING devices found 00:20:35.184 Initializing NVMe Controllers 00:20:35.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.184 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:35.184 WARNING: Some requested NVMe devices were skipped 00:20:35.184 13:05:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:35.184 13:05:15 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.388 Initializing NVMe Controllers 00:20:47.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.388 Initialization complete. Launching workers. 00:20:47.388 ======================================================== 00:20:47.388 Latency(us) 00:20:47.388 Device Information : IOPS MiB/s Average min max 00:20:47.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 794.10 99.26 1258.39 386.60 8377.38 00:20:47.388 ======================================================== 00:20:47.388 Total : 794.10 99.26 1258.39 386.60 8377.38 00:20:47.388 00:20:47.388 13:05:26 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:47.388 13:05:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:47.388 13:05:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.388 No valid NVMe controllers or AIO or URING devices found 00:20:47.388 Initializing NVMe Controllers 00:20:47.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.388 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:47.388 WARNING: Some requested NVMe devices were skipped 00:20:47.388 13:05:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:47.388 13:05:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.373 Initializing NVMe Controllers 00:20:57.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:57.373 Initialization complete. Launching workers. 
00:20:57.373 ======================================================== 00:20:57.373 Latency(us) 00:20:57.373 Device Information : IOPS MiB/s Average min max 00:20:57.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1089.20 136.15 29423.75 8092.23 266966.53 00:20:57.373 ======================================================== 00:20:57.373 Total : 1089.20 136.15 29423.75 8092.23 266966.53 00:20:57.373 00:20:57.373 13:05:36 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:57.373 13:05:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:57.373 13:05:36 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.373 No valid NVMe controllers or AIO or URING devices found 00:20:57.373 Initializing NVMe Controllers 00:20:57.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.373 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:57.373 WARNING: Some requested NVMe devices were skipped 00:20:57.373 13:05:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:57.373 13:05:37 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:07.354 Initializing NVMe Controllers 00:21:07.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.354 Controller IO queue size 128, less than required. 00:21:07.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:07.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:07.354 Initialization complete. Launching workers. 
00:21:07.354 ======================================================== 00:21:07.354 Latency(us) 00:21:07.354 Device Information : IOPS MiB/s Average min max 00:21:07.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3033.11 379.14 42294.04 16211.59 85567.37 00:21:07.354 ======================================================== 00:21:07.354 Total : 3033.11 379.14 42294.04 16211.59 85567.37 00:21:07.354 00:21:07.354 13:05:47 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.354 13:05:47 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 57f9a99c-1b69-4fae-bf06-9b2ecbbea6dd 00:21:07.354 13:05:48 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:07.922 13:05:48 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5b69de7d-e36e-40aa-8686-1c06803743fa 00:21:08.180 13:05:48 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:08.439 13:05:49 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:08.439 13:05:49 -- host/perf.sh@114 -- # nvmftestfini 00:21:08.439 13:05:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:08.439 13:05:49 -- nvmf/common.sh@116 -- # sync 00:21:08.439 13:05:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:08.440 13:05:49 -- nvmf/common.sh@119 -- # set +e 00:21:08.440 13:05:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:08.440 13:05:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:08.440 rmmod nvme_tcp 00:21:08.440 rmmod nvme_fabrics 00:21:08.440 rmmod nvme_keyring 00:21:08.440 13:05:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:08.440 13:05:49 -- nvmf/common.sh@123 -- # set -e 00:21:08.440 13:05:49 -- nvmf/common.sh@124 -- # return 0 00:21:08.440 13:05:49 -- nvmf/common.sh@477 -- # '[' -n 93396 ']' 00:21:08.440 13:05:49 -- nvmf/common.sh@478 -- # killprocess 93396 00:21:08.440 13:05:49 -- common/autotest_common.sh@936 -- # '[' -z 93396 ']' 00:21:08.440 13:05:49 -- common/autotest_common.sh@940 -- # kill -0 93396 00:21:08.440 13:05:49 -- common/autotest_common.sh@941 -- # uname 00:21:08.440 13:05:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:08.440 13:05:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93396 00:21:08.440 killing process with pid 93396 00:21:08.440 13:05:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:08.440 13:05:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:08.440 13:05:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93396' 00:21:08.440 13:05:49 -- common/autotest_common.sh@955 -- # kill 93396 00:21:08.440 13:05:49 -- common/autotest_common.sh@960 -- # wait 93396 00:21:09.386 13:05:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:09.386 13:05:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:09.386 13:05:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:09.386 13:05:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.386 13:05:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:09.386 13:05:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.386 13:05:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.386 13:05:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.386 13:05:49 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:09.386 ************************************ 00:21:09.386 END TEST nvmf_perf 00:21:09.386 ************************************ 00:21:09.386 00:21:09.386 real 0m50.943s 00:21:09.386 user 3m12.913s 00:21:09.386 sys 0m10.534s 00:21:09.386 13:05:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:09.386 13:05:49 -- common/autotest_common.sh@10 -- # set +x 00:21:09.386 13:05:50 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:09.386 13:05:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:09.386 13:05:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:09.386 13:05:50 -- common/autotest_common.sh@10 -- # set +x 00:21:09.386 ************************************ 00:21:09.386 START TEST nvmf_fio_host 00:21:09.386 ************************************ 00:21:09.386 13:05:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:09.386 * Looking for test storage... 00:21:09.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:09.386 13:05:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:09.386 13:05:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:09.386 13:05:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:09.682 13:05:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:09.682 13:05:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:09.682 13:05:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:09.682 13:05:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:09.682 13:05:50 -- scripts/common.sh@335 -- # IFS=.-: 00:21:09.682 13:05:50 -- scripts/common.sh@335 -- # read -ra ver1 00:21:09.682 13:05:50 -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.682 13:05:50 -- scripts/common.sh@336 -- # read -ra ver2 00:21:09.682 13:05:50 -- scripts/common.sh@337 -- # local 'op=<' 00:21:09.682 13:05:50 -- scripts/common.sh@339 -- # ver1_l=2 00:21:09.682 13:05:50 -- scripts/common.sh@340 -- # ver2_l=1 00:21:09.682 13:05:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:09.682 13:05:50 -- scripts/common.sh@343 -- # case "$op" in 00:21:09.682 13:05:50 -- scripts/common.sh@344 -- # : 1 00:21:09.682 13:05:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:09.682 13:05:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.682 13:05:50 -- scripts/common.sh@364 -- # decimal 1 00:21:09.682 13:05:50 -- scripts/common.sh@352 -- # local d=1 00:21:09.682 13:05:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.682 13:05:50 -- scripts/common.sh@354 -- # echo 1 00:21:09.682 13:05:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:09.682 13:05:50 -- scripts/common.sh@365 -- # decimal 2 00:21:09.682 13:05:50 -- scripts/common.sh@352 -- # local d=2 00:21:09.682 13:05:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.682 13:05:50 -- scripts/common.sh@354 -- # echo 2 00:21:09.682 13:05:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:09.682 13:05:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:09.682 13:05:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:09.682 13:05:50 -- scripts/common.sh@367 -- # return 0 00:21:09.682 13:05:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.682 13:05:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:09.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.682 --rc genhtml_branch_coverage=1 00:21:09.682 --rc genhtml_function_coverage=1 00:21:09.682 --rc genhtml_legend=1 00:21:09.682 --rc geninfo_all_blocks=1 00:21:09.682 --rc geninfo_unexecuted_blocks=1 00:21:09.682 00:21:09.682 ' 00:21:09.682 13:05:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:09.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.682 --rc genhtml_branch_coverage=1 00:21:09.682 --rc genhtml_function_coverage=1 00:21:09.682 --rc genhtml_legend=1 00:21:09.682 --rc geninfo_all_blocks=1 00:21:09.682 --rc geninfo_unexecuted_blocks=1 00:21:09.682 00:21:09.682 ' 00:21:09.682 13:05:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:09.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.682 --rc genhtml_branch_coverage=1 00:21:09.682 --rc genhtml_function_coverage=1 00:21:09.682 --rc genhtml_legend=1 00:21:09.682 --rc geninfo_all_blocks=1 00:21:09.682 --rc geninfo_unexecuted_blocks=1 00:21:09.682 00:21:09.682 ' 00:21:09.682 13:05:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:09.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.682 --rc genhtml_branch_coverage=1 00:21:09.682 --rc genhtml_function_coverage=1 00:21:09.682 --rc genhtml_legend=1 00:21:09.682 --rc geninfo_all_blocks=1 00:21:09.682 --rc geninfo_unexecuted_blocks=1 00:21:09.682 00:21:09.682 ' 00:21:09.682 13:05:50 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.682 13:05:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.682 13:05:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.682 13:05:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.682 13:05:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- paths/export.sh@5 -- # export PATH 00:21:09.682 13:05:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.682 13:05:50 -- nvmf/common.sh@7 -- # uname -s 00:21:09.682 13:05:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.682 13:05:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.682 13:05:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.682 13:05:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.682 13:05:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.682 13:05:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.682 13:05:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.682 13:05:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.682 13:05:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.682 13:05:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.682 13:05:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:21:09.682 13:05:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:21:09.682 13:05:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.682 13:05:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.682 13:05:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.682 13:05:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.682 13:05:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.682 13:05:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.682 13:05:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.682 13:05:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- paths/export.sh@5 -- # export PATH 00:21:09.682 13:05:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.682 13:05:50 -- nvmf/common.sh@46 -- # : 0 00:21:09.682 13:05:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:09.682 13:05:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:09.682 13:05:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:09.682 13:05:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.682 13:05:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.683 13:05:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:09.683 13:05:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:09.683 13:05:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:09.683 13:05:50 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:09.683 13:05:50 -- host/fio.sh@14 -- # nvmftestinit 00:21:09.683 13:05:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:09.683 13:05:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.683 13:05:50 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:21:09.683 13:05:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:09.683 13:05:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:09.683 13:05:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.683 13:05:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.683 13:05:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.683 13:05:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:09.683 13:05:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:09.683 13:05:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:09.683 13:05:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:09.683 13:05:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:09.683 13:05:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:09.683 13:05:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.683 13:05:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.683 13:05:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:09.683 13:05:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:09.683 13:05:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:09.683 13:05:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:09.683 13:05:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:09.683 13:05:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.683 13:05:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:09.683 13:05:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:09.683 13:05:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:09.683 13:05:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:09.683 13:05:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:09.683 13:05:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:09.683 Cannot find device "nvmf_tgt_br" 00:21:09.683 13:05:50 -- nvmf/common.sh@154 -- # true 00:21:09.683 13:05:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.683 Cannot find device "nvmf_tgt_br2" 00:21:09.683 13:05:50 -- nvmf/common.sh@155 -- # true 00:21:09.683 13:05:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:09.683 13:05:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:09.683 Cannot find device "nvmf_tgt_br" 00:21:09.683 13:05:50 -- nvmf/common.sh@157 -- # true 00:21:09.683 13:05:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:09.683 Cannot find device "nvmf_tgt_br2" 00:21:09.683 13:05:50 -- nvmf/common.sh@158 -- # true 00:21:09.683 13:05:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:09.683 13:05:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:09.683 13:05:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.683 13:05:50 -- nvmf/common.sh@161 -- # true 00:21:09.683 13:05:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.683 13:05:50 -- nvmf/common.sh@162 -- # true 00:21:09.683 13:05:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:09.683 13:05:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:09.683 13:05:50 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:09.683 13:05:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:09.683 13:05:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:09.683 13:05:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:09.956 13:05:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:09.956 13:05:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:09.956 13:05:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:09.956 13:05:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:09.956 13:05:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:09.956 13:05:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:09.956 13:05:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:09.956 13:05:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:09.956 13:05:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:09.956 13:05:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:09.956 13:05:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:09.956 13:05:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:09.956 13:05:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:09.956 13:05:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:09.956 13:05:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:09.956 13:05:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:09.956 13:05:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.956 13:05:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:09.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:21:09.956 00:21:09.956 --- 10.0.0.2 ping statistics --- 00:21:09.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.956 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:09.956 13:05:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:09.956 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:09.956 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:21:09.956 00:21:09.956 --- 10.0.0.3 ping statistics --- 00:21:09.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.956 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:09.956 13:05:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:09.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:09.956 00:21:09.956 --- 10.0.0.1 ping statistics --- 00:21:09.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.957 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:09.957 13:05:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.957 13:05:50 -- nvmf/common.sh@421 -- # return 0 00:21:09.957 13:05:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:09.957 13:05:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.957 13:05:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:09.957 13:05:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:09.957 13:05:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.957 13:05:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:09.957 13:05:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:09.957 13:05:50 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:09.957 13:05:50 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:09.957 13:05:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.957 13:05:50 -- common/autotest_common.sh@10 -- # set +x 00:21:09.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.957 13:05:50 -- host/fio.sh@24 -- # nvmfpid=94371 00:21:09.957 13:05:50 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:09.957 13:05:50 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.957 13:05:50 -- host/fio.sh@28 -- # waitforlisten 94371 00:21:09.957 13:05:50 -- common/autotest_common.sh@829 -- # '[' -z 94371 ']' 00:21:09.957 13:05:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.957 13:05:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.957 13:05:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.957 13:05:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.957 13:05:50 -- common/autotest_common.sh@10 -- # set +x 00:21:09.957 [2024-12-13 13:05:50.641939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:09.957 [2024-12-13 13:05:50.642027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.214 [2024-12-13 13:05:50.783460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.214 [2024-12-13 13:05:50.851778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:10.214 [2024-12-13 13:05:50.851925] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.214 [2024-12-13 13:05:50.851937] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.214 [2024-12-13 13:05:50.851945] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
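For reference, the nvmf_veth_init sequence traced above boils down to: one veth pair for the initiator on the host side, veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target, and an nvmf_br bridge tying the host-side ends together, with an iptables rule admitting TCP port 4420. A condensed sketch follows (names and addresses copied from the log, second target interface omitted for brevity; requires root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host-to-namespace reachability check, as the test does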
00:21:10.214 [2024-12-13 13:05:50.852013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.214 [2024-12-13 13:05:50.852612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.214 [2024-12-13 13:05:50.852722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.214 [2024-12-13 13:05:50.852731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.149 13:05:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.149 13:05:51 -- common/autotest_common.sh@862 -- # return 0 00:21:11.149 13:05:51 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:11.149 [2024-12-13 13:05:51.899863] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.408 13:05:51 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:11.408 13:05:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.408 13:05:51 -- common/autotest_common.sh@10 -- # set +x 00:21:11.408 13:05:51 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:11.667 Malloc1 00:21:11.667 13:05:52 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.926 13:05:52 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:12.185 13:05:52 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.443 [2024-12-13 13:05:53.053390] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.443 13:05:53 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:12.702 13:05:53 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:12.702 13:05:53 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.702 13:05:53 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.702 13:05:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:12.702 13:05:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.702 13:05:53 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:12.702 13:05:53 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.702 13:05:53 -- common/autotest_common.sh@1330 -- # shift 00:21:12.702 13:05:53 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:12.702 13:05:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:12.702 13:05:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:12.702 13:05:53 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:12.702 13:05:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:12.702 13:05:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:12.702 13:05:53 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:12.702 13:05:53 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.961 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:12.961 fio-3.35 00:21:12.961 Starting 1 thread 00:21:15.494 00:21:15.494 test: (groupid=0, jobs=1): err= 0: pid=94502: Fri Dec 13 13:05:55 2024 00:21:15.494 read: IOPS=9660, BW=37.7MiB/s (39.6MB/s)(75.7MiB/2007msec) 00:21:15.494 slat (nsec): min=1725, max=211436, avg=2348.50, stdev=2597.21 00:21:15.494 clat (usec): min=2367, max=13538, avg=7019.13, stdev=672.61 00:21:15.494 lat (usec): min=2402, max=13540, avg=7021.48, stdev=672.51 00:21:15.494 clat percentiles (usec): 00:21:15.494 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:21:15.494 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:21:15.494 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8160], 00:21:15.495 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11338], 00:21:15.495 | 99.99th=[13566] 00:21:15.495 bw ( KiB/s): min=38344, max=39032, per=100.00%, avg=38644.00, stdev=328.84, samples=4 00:21:15.495 iops : min= 9586, max= 9758, avg=9661.00, stdev=82.21, samples=4 00:21:15.495 write: IOPS=9666, BW=37.8MiB/s (39.6MB/s)(75.8MiB/2007msec); 0 zone resets 00:21:15.495 slat (nsec): min=1817, max=173745, avg=2493.68, stdev=2278.28 00:21:15.495 clat (usec): min=1469, max=12630, avg=6150.54, stdev=568.41 00:21:15.495 lat (usec): min=1477, max=12632, avg=6153.03, stdev=568.41 00:21:15.495 clat percentiles (usec): 00:21:15.495 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:21:15.495 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:21:15.495 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6783], 95.00th=[ 7046], 00:21:15.495 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[10683], 99.95th=[11338], 00:21:15.495 | 99.99th=[12518] 00:21:15.495 bw ( KiB/s): min=37760, max=39304, per=100.00%, avg=38680.00, stdev=654.08, samples=4 00:21:15.495 iops : min= 9440, max= 9826, avg=9670.00, stdev=163.52, samples=4 00:21:15.495 lat (msec) : 2=0.03%, 4=0.14%, 10=99.67%, 20=0.16% 00:21:15.495 cpu : usr=67.05%, sys=23.83%, ctx=6, majf=0, minf=5 00:21:15.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:15.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:15.495 issued rwts: total=19388,19401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:15.495 00:21:15.495 Run status group 0 (all jobs): 00:21:15.495 READ: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=75.7MiB 
(79.4MB), run=2007-2007msec 00:21:15.495 WRITE: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.8MiB (79.5MB), run=2007-2007msec 00:21:15.495 13:05:55 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:15.495 13:05:55 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:15.495 13:05:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:15.495 13:05:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.495 13:05:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:15.495 13:05:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:15.495 13:05:55 -- common/autotest_common.sh@1330 -- # shift 00:21:15.495 13:05:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:15.495 13:05:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:15.495 13:05:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:15.495 13:05:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:15.495 13:05:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:15.495 13:05:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:15.495 13:05:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:15.495 13:05:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:15.495 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:15.495 fio-3.35 00:21:15.495 Starting 1 thread 00:21:18.030 00:21:18.030 test: (groupid=0, jobs=1): err= 0: pid=94551: Fri Dec 13 13:05:58 2024 00:21:18.030 read: IOPS=8792, BW=137MiB/s (144MB/s)(276MiB/2007msec) 00:21:18.030 slat (usec): min=2, max=129, avg= 3.42, stdev= 2.57 00:21:18.030 clat (usec): min=2417, max=18230, avg=8748.47, stdev=2408.32 00:21:18.030 lat (usec): min=2420, max=18234, avg=8751.89, stdev=2408.42 00:21:18.030 clat percentiles (usec): 00:21:18.030 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6587], 00:21:18.030 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9241], 00:21:18.030 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11600], 95.00th=[12911], 00:21:18.030 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17695], 99.95th=[17695], 00:21:18.030 | 99.99th=[17957] 00:21:18.030 bw ( KiB/s): min=64448, max=85376, per=50.55%, avg=71112.00, stdev=9639.89, samples=4 
00:21:18.030 iops : min= 4028, max= 5336, avg=4444.50, stdev=602.49, samples=4 00:21:18.030 write: IOPS=5187, BW=81.1MiB/s (85.0MB/s)(145MiB/1784msec); 0 zone resets 00:21:18.030 slat (usec): min=30, max=331, avg=34.69, stdev= 8.40 00:21:18.030 clat (usec): min=3347, max=18308, avg=10269.55, stdev=2000.54 00:21:18.030 lat (usec): min=3377, max=18341, avg=10304.24, stdev=2000.96 00:21:18.030 clat percentiles (usec): 00:21:18.030 | 1.00th=[ 6456], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8586], 00:21:18.030 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10552], 00:21:18.030 | 70.00th=[11076], 80.00th=[11863], 90.00th=[12911], 95.00th=[13960], 00:21:18.030 | 99.00th=[16188], 99.50th=[16712], 99.90th=[18220], 99.95th=[18220], 00:21:18.030 | 99.99th=[18220] 00:21:18.030 bw ( KiB/s): min=67392, max=88672, per=89.20%, avg=74040.00, stdev=9854.66, samples=4 00:21:18.030 iops : min= 4212, max= 5542, avg=4627.50, stdev=615.92, samples=4 00:21:18.030 lat (msec) : 4=0.41%, 10=61.56%, 20=38.04% 00:21:18.030 cpu : usr=73.68%, sys=17.00%, ctx=5, majf=0, minf=1 00:21:18.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:18.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:18.031 issued rwts: total=17646,9255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:18.031 00:21:18.031 Run status group 0 (all jobs): 00:21:18.031 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=276MiB (289MB), run=2007-2007msec 00:21:18.031 WRITE: bw=81.1MiB/s (85.0MB/s), 81.1MiB/s-81.1MiB/s (85.0MB/s-85.0MB/s), io=145MiB (152MB), run=1784-1784msec 00:21:18.031 13:05:58 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.031 13:05:58 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:18.031 13:05:58 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:18.031 13:05:58 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:18.031 13:05:58 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:18.031 13:05:58 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:18.031 13:05:58 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:18.031 13:05:58 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:18.031 13:05:58 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:18.031 13:05:58 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:18.031 13:05:58 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:18.031 13:05:58 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:18.289 Nvme0n1 00:21:18.289 13:05:58 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:18.548 13:05:59 -- host/fio.sh@53 -- # ls_guid=9500f471-cb84-40c2-81cf-6179e9ba3d58 00:21:18.548 13:05:59 -- host/fio.sh@54 -- # get_lvs_free_mb 9500f471-cb84-40c2-81cf-6179e9ba3d58 00:21:18.548 13:05:59 -- common/autotest_common.sh@1353 -- # local lvs_uuid=9500f471-cb84-40c2-81cf-6179e9ba3d58 00:21:18.548 13:05:59 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:18.548 13:05:59 -- common/autotest_common.sh@1355 -- # local fc 00:21:18.548 
13:05:59 -- common/autotest_common.sh@1356 -- # local cs 00:21:18.548 13:05:59 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:18.807 13:05:59 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:18.807 { 00:21:18.807 "base_bdev": "Nvme0n1", 00:21:18.807 "block_size": 4096, 00:21:18.807 "cluster_size": 1073741824, 00:21:18.807 "free_clusters": 4, 00:21:18.807 "name": "lvs_0", 00:21:18.807 "total_data_clusters": 4, 00:21:18.807 "uuid": "9500f471-cb84-40c2-81cf-6179e9ba3d58" 00:21:18.807 } 00:21:18.807 ]' 00:21:18.807 13:05:59 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="9500f471-cb84-40c2-81cf-6179e9ba3d58") .free_clusters' 00:21:18.807 13:05:59 -- common/autotest_common.sh@1358 -- # fc=4 00:21:18.807 13:05:59 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="9500f471-cb84-40c2-81cf-6179e9ba3d58") .cluster_size' 00:21:18.807 4096 00:21:18.807 13:05:59 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:18.807 13:05:59 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:18.807 13:05:59 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:18.807 13:05:59 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:19.065 d919ca4d-98c2-4408-a3df-5f2d2386e31a 00:21:19.065 13:05:59 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:19.324 13:05:59 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:19.582 13:06:00 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:19.840 13:06:00 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.840 13:06:00 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.840 13:06:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:19.840 13:06:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.840 13:06:00 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:19.840 13:06:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.840 13:06:00 -- common/autotest_common.sh@1330 -- # shift 00:21:19.840 13:06:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:19.840 13:06:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.840 13:06:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.840 13:06:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:19.840 13:06:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:19.840 13:06:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:19.840 13:06:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:19.840 13:06:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.840 13:06:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.840 13:06:00 
-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:19.840 13:06:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:19.840 13:06:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:19.840 13:06:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:19.840 13:06:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:19.840 13:06:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:20.099 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:20.099 fio-3.35 00:21:20.099 Starting 1 thread 00:21:22.632 00:21:22.632 test: (groupid=0, jobs=1): err= 0: pid=94704: Fri Dec 13 13:06:02 2024 00:21:22.632 read: IOPS=6381, BW=24.9MiB/s (26.1MB/s)(50.1MiB/2008msec) 00:21:22.632 slat (nsec): min=1931, max=373417, avg=2945.06, stdev=4588.91 00:21:22.632 clat (usec): min=4525, max=18064, avg=10688.70, stdev=1026.53 00:21:22.632 lat (usec): min=4534, max=18066, avg=10691.64, stdev=1026.43 00:21:22.632 clat percentiles (usec): 00:21:22.632 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9896], 00:21:22.632 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:21:22.632 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:21:22.632 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14746], 99.95th=[16319], 00:21:22.632 | 99.99th=[17171] 00:21:22.632 bw ( KiB/s): min=24296, max=26344, per=99.95%, avg=25514.00, stdev=866.33, samples=4 00:21:22.632 iops : min= 6074, max= 6586, avg=6378.50, stdev=216.58, samples=4 00:21:22.632 write: IOPS=6382, BW=24.9MiB/s (26.1MB/s)(50.1MiB/2008msec); 0 zone resets 00:21:22.632 slat (usec): min=2, max=325, avg= 3.10, stdev= 4.00 00:21:22.632 clat (usec): min=2614, max=17152, avg=9313.03, stdev=906.60 00:21:22.632 lat (usec): min=2625, max=17154, avg=9316.13, stdev=906.57 00:21:22.632 clat percentiles (usec): 00:21:22.632 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8586], 00:21:22.632 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:21:22.632 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:21:22.632 | 99.00th=[11338], 99.50th=[11731], 99.90th=[14877], 99.95th=[16057], 00:21:22.632 | 99.99th=[17171] 00:21:22.632 bw ( KiB/s): min=25152, max=26304, per=99.85%, avg=25494.00, stdev=542.40, samples=4 00:21:22.632 iops : min= 6288, max= 6576, avg=6373.50, stdev=135.60, samples=4 00:21:22.632 lat (msec) : 4=0.04%, 10=52.37%, 20=47.59% 00:21:22.632 cpu : usr=70.20%, sys=22.22%, ctx=43, majf=0, minf=5 00:21:22.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:22.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.632 issued rwts: total=12814,12817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.632 00:21:22.632 Run status group 0 (all jobs): 00:21:22.632 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.1MiB (52.5MB), run=2008-2008msec 00:21:22.632 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.1MiB (52.5MB), run=2008-2008msec 00:21:22.632 13:06:02 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:22.632 13:06:03 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:22.891 13:06:03 -- host/fio.sh@64 -- # ls_nested_guid=e40e9863-7ace-4487-a521-7bcd3c28df94 00:21:22.891 13:06:03 -- host/fio.sh@65 -- # get_lvs_free_mb e40e9863-7ace-4487-a521-7bcd3c28df94 00:21:22.891 13:06:03 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e40e9863-7ace-4487-a521-7bcd3c28df94 00:21:22.891 13:06:03 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:22.891 13:06:03 -- common/autotest_common.sh@1355 -- # local fc 00:21:22.891 13:06:03 -- common/autotest_common.sh@1356 -- # local cs 00:21:22.891 13:06:03 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:23.149 13:06:03 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:23.149 { 00:21:23.149 "base_bdev": "Nvme0n1", 00:21:23.149 "block_size": 4096, 00:21:23.149 "cluster_size": 1073741824, 00:21:23.149 "free_clusters": 0, 00:21:23.149 "name": "lvs_0", 00:21:23.149 "total_data_clusters": 4, 00:21:23.149 "uuid": "9500f471-cb84-40c2-81cf-6179e9ba3d58" 00:21:23.149 }, 00:21:23.149 { 00:21:23.149 "base_bdev": "d919ca4d-98c2-4408-a3df-5f2d2386e31a", 00:21:23.149 "block_size": 4096, 00:21:23.149 "cluster_size": 4194304, 00:21:23.149 "free_clusters": 1022, 00:21:23.149 "name": "lvs_n_0", 00:21:23.149 "total_data_clusters": 1022, 00:21:23.149 "uuid": "e40e9863-7ace-4487-a521-7bcd3c28df94" 00:21:23.149 } 00:21:23.149 ]' 00:21:23.149 13:06:03 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e40e9863-7ace-4487-a521-7bcd3c28df94") .free_clusters' 00:21:23.149 13:06:03 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:23.149 13:06:03 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e40e9863-7ace-4487-a521-7bcd3c28df94") .cluster_size' 00:21:23.149 13:06:03 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:23.149 13:06:03 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:23.149 4088 00:21:23.149 13:06:03 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:23.149 13:06:03 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:23.408 078e230a-d152-42a6-8f15-80c237704812 00:21:23.408 13:06:04 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:23.666 13:06:04 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:23.925 13:06:04 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:24.184 13:06:04 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.184 13:06:04 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.184 13:06:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:24.184 13:06:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:24.184 
13:06:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:24.184 13:06:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.184 13:06:04 -- common/autotest_common.sh@1330 -- # shift 00:21:24.184 13:06:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:24.184 13:06:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.184 13:06:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.184 13:06:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.184 13:06:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.184 13:06:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.184 13:06:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:24.184 13:06:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.449 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:24.449 fio-3.35 00:21:24.449 Starting 1 thread 00:21:26.990 00:21:26.990 test: (groupid=0, jobs=1): err= 0: pid=94825: Fri Dec 13 13:06:07 2024 00:21:26.990 read: IOPS=5296, BW=20.7MiB/s (21.7MB/s)(41.6MiB/2009msec) 00:21:26.990 slat (nsec): min=1822, max=410658, avg=2881.14, stdev=5542.92 00:21:26.990 clat (usec): min=5680, max=19412, avg=12968.24, stdev=1444.26 00:21:26.990 lat (usec): min=5690, max=19415, avg=12971.12, stdev=1444.04 00:21:26.990 clat percentiles (usec): 00:21:26.990 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11207], 20.00th=[11731], 00:21:26.990 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:21:26.990 | 70.00th=[13698], 80.00th=[14222], 90.00th=[14877], 95.00th=[15401], 00:21:26.990 | 99.00th=[16450], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:21:26.990 | 99.99th=[19268] 00:21:26.990 bw ( KiB/s): min=20016, max=22248, per=99.81%, avg=21146.00, stdev=914.72, samples=4 00:21:26.990 iops : min= 5004, max= 5562, avg=5286.50, stdev=228.68, samples=4 00:21:26.990 write: IOPS=5287, BW=20.7MiB/s (21.7MB/s)(41.5MiB/2009msec); 0 zone resets 00:21:26.990 slat (nsec): min=1921, max=351444, avg=3055.61, stdev=4177.92 00:21:26.990 clat (usec): min=2852, max=19451, avg=11108.48, stdev=1215.92 00:21:26.990 lat (usec): min=2864, max=19453, avg=11111.54, stdev=1215.89 00:21:26.990 clat percentiles (usec): 00:21:26.990 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:21:26.990 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:21:26.990 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042], 00:21:26.990 | 99.00th=[13960], 99.50th=[14353], 99.90th=[17433], 99.95th=[18482], 00:21:26.990 | 99.99th=[18744] 
00:21:26.990 bw ( KiB/s): min=20672, max=21696, per=99.89%, avg=21126.00, stdev=433.84, samples=4 00:21:26.990 iops : min= 5168, max= 5424, avg=5281.50, stdev=108.46, samples=4 00:21:26.990 lat (msec) : 4=0.02%, 10=9.00%, 20=90.98% 00:21:26.990 cpu : usr=70.82%, sys=22.56%, ctx=158, majf=0, minf=5 00:21:26.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:26.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:26.990 issued rwts: total=10641,10622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:26.990 00:21:26.990 Run status group 0 (all jobs): 00:21:26.990 READ: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.6MiB (43.6MB), run=2009-2009msec 00:21:26.990 WRITE: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.5MiB (43.5MB), run=2009-2009msec 00:21:26.990 13:06:07 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:26.990 13:06:07 -- host/fio.sh@74 -- # sync 00:21:26.990 13:06:07 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:27.249 13:06:07 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:27.507 13:06:08 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:27.766 13:06:08 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:28.025 13:06:08 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:28.960 13:06:09 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:28.960 13:06:09 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:28.960 13:06:09 -- host/fio.sh@86 -- # nvmftestfini 00:21:28.960 13:06:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:28.960 13:06:09 -- nvmf/common.sh@116 -- # sync 00:21:28.960 13:06:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:28.960 13:06:09 -- nvmf/common.sh@119 -- # set +e 00:21:28.960 13:06:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:28.960 13:06:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:28.960 rmmod nvme_tcp 00:21:28.960 rmmod nvme_fabrics 00:21:28.960 rmmod nvme_keyring 00:21:28.960 13:06:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:28.960 13:06:09 -- nvmf/common.sh@123 -- # set -e 00:21:28.960 13:06:09 -- nvmf/common.sh@124 -- # return 0 00:21:28.960 13:06:09 -- nvmf/common.sh@477 -- # '[' -n 94371 ']' 00:21:28.960 13:06:09 -- nvmf/common.sh@478 -- # killprocess 94371 00:21:28.960 13:06:09 -- common/autotest_common.sh@936 -- # '[' -z 94371 ']' 00:21:28.960 13:06:09 -- common/autotest_common.sh@940 -- # kill -0 94371 00:21:28.960 13:06:09 -- common/autotest_common.sh@941 -- # uname 00:21:28.960 13:06:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:28.960 13:06:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94371 00:21:28.960 13:06:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:28.960 13:06:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:28.960 13:06:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94371' 00:21:28.960 killing process with pid 94371 00:21:28.960 13:06:09 -- 
common/autotest_common.sh@955 -- # kill 94371 00:21:28.960 13:06:09 -- common/autotest_common.sh@960 -- # wait 94371 00:21:29.219 13:06:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:29.219 13:06:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:29.219 13:06:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:29.219 13:06:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.219 13:06:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:29.219 13:06:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.219 13:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.219 13:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.219 13:06:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:29.219 ************************************ 00:21:29.219 END TEST nvmf_fio_host 00:21:29.219 ************************************ 00:21:29.219 00:21:29.219 real 0m19.871s 00:21:29.219 user 1m27.263s 00:21:29.219 sys 0m4.390s 00:21:29.219 13:06:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:29.219 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:21:29.219 13:06:09 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:29.219 13:06:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:29.219 13:06:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:29.219 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:21:29.219 ************************************ 00:21:29.219 START TEST nvmf_failover 00:21:29.219 ************************************ 00:21:29.219 13:06:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:29.477 * Looking for test storage... 00:21:29.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:29.478 13:06:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:29.478 13:06:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:29.478 13:06:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:29.478 13:06:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:29.478 13:06:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:29.478 13:06:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:29.478 13:06:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:29.478 13:06:10 -- scripts/common.sh@335 -- # IFS=.-: 00:21:29.478 13:06:10 -- scripts/common.sh@335 -- # read -ra ver1 00:21:29.478 13:06:10 -- scripts/common.sh@336 -- # IFS=.-: 00:21:29.478 13:06:10 -- scripts/common.sh@336 -- # read -ra ver2 00:21:29.478 13:06:10 -- scripts/common.sh@337 -- # local 'op=<' 00:21:29.478 13:06:10 -- scripts/common.sh@339 -- # ver1_l=2 00:21:29.478 13:06:10 -- scripts/common.sh@340 -- # ver2_l=1 00:21:29.478 13:06:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:29.478 13:06:10 -- scripts/common.sh@343 -- # case "$op" in 00:21:29.478 13:06:10 -- scripts/common.sh@344 -- # : 1 00:21:29.478 13:06:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:29.478 13:06:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:29.478 13:06:10 -- scripts/common.sh@364 -- # decimal 1 00:21:29.478 13:06:10 -- scripts/common.sh@352 -- # local d=1 00:21:29.478 13:06:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:29.478 13:06:10 -- scripts/common.sh@354 -- # echo 1 00:21:29.478 13:06:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:29.478 13:06:10 -- scripts/common.sh@365 -- # decimal 2 00:21:29.478 13:06:10 -- scripts/common.sh@352 -- # local d=2 00:21:29.478 13:06:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:29.478 13:06:10 -- scripts/common.sh@354 -- # echo 2 00:21:29.478 13:06:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:29.478 13:06:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:29.478 13:06:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:29.478 13:06:10 -- scripts/common.sh@367 -- # return 0 00:21:29.478 13:06:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:29.478 13:06:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:29.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.478 --rc genhtml_branch_coverage=1 00:21:29.478 --rc genhtml_function_coverage=1 00:21:29.478 --rc genhtml_legend=1 00:21:29.478 --rc geninfo_all_blocks=1 00:21:29.478 --rc geninfo_unexecuted_blocks=1 00:21:29.478 00:21:29.478 ' 00:21:29.478 13:06:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:29.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.478 --rc genhtml_branch_coverage=1 00:21:29.478 --rc genhtml_function_coverage=1 00:21:29.478 --rc genhtml_legend=1 00:21:29.478 --rc geninfo_all_blocks=1 00:21:29.478 --rc geninfo_unexecuted_blocks=1 00:21:29.478 00:21:29.478 ' 00:21:29.478 13:06:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:29.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.478 --rc genhtml_branch_coverage=1 00:21:29.478 --rc genhtml_function_coverage=1 00:21:29.478 --rc genhtml_legend=1 00:21:29.478 --rc geninfo_all_blocks=1 00:21:29.478 --rc geninfo_unexecuted_blocks=1 00:21:29.478 00:21:29.478 ' 00:21:29.478 13:06:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:29.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.478 --rc genhtml_branch_coverage=1 00:21:29.478 --rc genhtml_function_coverage=1 00:21:29.478 --rc genhtml_legend=1 00:21:29.478 --rc geninfo_all_blocks=1 00:21:29.478 --rc geninfo_unexecuted_blocks=1 00:21:29.478 00:21:29.478 ' 00:21:29.478 13:06:10 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:29.478 13:06:10 -- nvmf/common.sh@7 -- # uname -s 00:21:29.478 13:06:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.478 13:06:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.478 13:06:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.478 13:06:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.478 13:06:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.478 13:06:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.478 13:06:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.478 13:06:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.478 13:06:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.478 13:06:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.478 13:06:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:21:29.478 
13:06:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:21:29.478 13:06:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.478 13:06:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.478 13:06:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:29.478 13:06:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:29.478 13:06:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.478 13:06:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.478 13:06:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.478 13:06:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.478 13:06:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.478 13:06:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.478 13:06:10 -- paths/export.sh@5 -- # export PATH 00:21:29.478 13:06:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.478 13:06:10 -- nvmf/common.sh@46 -- # : 0 00:21:29.478 13:06:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:29.478 13:06:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:29.478 13:06:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:29.478 13:06:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.478 13:06:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.478 13:06:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
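nvmf/common.sh also prepares kernel-initiator plumbing here (a generated host NQN/ID and an 'nvme connect' wrapper) even though this failover run drives I/O through bdevperf rather than the kernel initiator. Purely as an illustration of how those variables would combine, a kernel-initiator connect from the host side would look roughly like the following (hypothetical invocation; hostnqn/hostid taken from the log, subsystem NQN from the cnode1 subsystem created below):

    # Illustrative only - this test does not issue this command.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 \
        --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29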
00:21:29.478 13:06:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:29.478 13:06:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:29.478 13:06:10 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:29.478 13:06:10 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:29.478 13:06:10 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:29.478 13:06:10 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:29.478 13:06:10 -- host/failover.sh@18 -- # nvmftestinit 00:21:29.478 13:06:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:29.478 13:06:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.478 13:06:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:29.478 13:06:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:29.478 13:06:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:29.478 13:06:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.478 13:06:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.478 13:06:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.478 13:06:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:29.478 13:06:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:29.478 13:06:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:29.478 13:06:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:29.478 13:06:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:29.478 13:06:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:29.478 13:06:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.478 13:06:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.478 13:06:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:29.478 13:06:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:29.478 13:06:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:29.478 13:06:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:29.478 13:06:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:29.478 13:06:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.478 13:06:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:29.478 13:06:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:29.478 13:06:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:29.478 13:06:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:29.478 13:06:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:29.478 13:06:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:29.478 Cannot find device "nvmf_tgt_br" 00:21:29.478 13:06:10 -- nvmf/common.sh@154 -- # true 00:21:29.478 13:06:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:29.478 Cannot find device "nvmf_tgt_br2" 00:21:29.478 13:06:10 -- nvmf/common.sh@155 -- # true 00:21:29.478 13:06:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:29.478 13:06:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:29.478 Cannot find device "nvmf_tgt_br" 00:21:29.478 13:06:10 -- nvmf/common.sh@157 -- # true 00:21:29.478 13:06:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:29.478 Cannot find device "nvmf_tgt_br2" 00:21:29.478 13:06:10 -- nvmf/common.sh@158 -- # true 00:21:29.478 13:06:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:29.737 13:06:10 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:29.737 13:06:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:29.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:29.737 13:06:10 -- nvmf/common.sh@161 -- # true 00:21:29.737 13:06:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:29.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:29.737 13:06:10 -- nvmf/common.sh@162 -- # true 00:21:29.737 13:06:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:29.737 13:06:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:29.737 13:06:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:29.737 13:06:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:29.737 13:06:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:29.737 13:06:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:29.737 13:06:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:29.737 13:06:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:29.737 13:06:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:29.737 13:06:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:29.737 13:06:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:29.737 13:06:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:29.737 13:06:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:29.737 13:06:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:29.737 13:06:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:29.737 13:06:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:29.737 13:06:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:29.737 13:06:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:29.737 13:06:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:29.737 13:06:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:29.737 13:06:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:29.737 13:06:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:29.737 13:06:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:29.737 13:06:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:29.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:21:29.737 00:21:29.737 --- 10.0.0.2 ping statistics --- 00:21:29.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.737 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:29.737 13:06:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:29.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:29.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:21:29.737 00:21:29.737 --- 10.0.0.3 ping statistics --- 00:21:29.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.737 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:29.737 13:06:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:29.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:29.738 00:21:29.738 --- 10.0.0.1 ping statistics --- 00:21:29.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.738 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:29.738 13:06:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.738 13:06:10 -- nvmf/common.sh@421 -- # return 0 00:21:29.738 13:06:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:29.738 13:06:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.738 13:06:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:29.738 13:06:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:29.738 13:06:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.738 13:06:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:29.738 13:06:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:29.738 13:06:10 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:29.738 13:06:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:29.738 13:06:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.738 13:06:10 -- common/autotest_common.sh@10 -- # set +x 00:21:30.006 13:06:10 -- nvmf/common.sh@469 -- # nvmfpid=95109 00:21:30.007 13:06:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:30.007 13:06:10 -- nvmf/common.sh@470 -- # waitforlisten 95109 00:21:30.007 13:06:10 -- common/autotest_common.sh@829 -- # '[' -z 95109 ']' 00:21:30.007 13:06:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.007 13:06:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.007 13:06:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.007 13:06:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.007 13:06:10 -- common/autotest_common.sh@10 -- # set +x 00:21:30.007 [2024-12-13 13:06:10.568335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:30.007 [2024-12-13 13:06:10.568416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.007 [2024-12-13 13:06:10.706249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:30.301 [2024-12-13 13:06:10.807863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:30.301 [2024-12-13 13:06:10.808319] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.301 [2024-12-13 13:06:10.808447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
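The nvmf_veth_init steps traced above wire the initiator and the target into one bridge, with the target ends of the veth pairs moved into the nvmf_tgt_ns_spdk namespace. Below is a minimal sketch of the same topology that rearranges only the ip/iptables invocations already shown in the trace; the second target path (nvmf_tgt_if2 / 10.0.0.3) is built the same way and is omitted here for brevity, and the loop variable dev is introduced just for this sketch.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side of the bridge
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target path
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator to target, matches the successful ping above

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the cleanup half of the same helper runs first and simply finds nothing to delete on a fresh VM.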
00:21:30.301 [2024-12-13 13:06:10.808670] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.301 [2024-12-13 13:06:10.808920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.301 [2024-12-13 13:06:10.808981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.301 [2024-12-13 13:06:10.808993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.884 13:06:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.884 13:06:11 -- common/autotest_common.sh@862 -- # return 0 00:21:30.884 13:06:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.884 13:06:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.884 13:06:11 -- common/autotest_common.sh@10 -- # set +x 00:21:30.884 13:06:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.884 13:06:11 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:31.451 [2024-12-13 13:06:11.943212] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.451 13:06:11 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:31.451 Malloc0 00:21:31.709 13:06:12 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:31.709 13:06:12 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.967 13:06:12 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.226 [2024-12-13 13:06:12.900648] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.226 13:06:12 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:32.484 [2024-12-13 13:06:13.128842] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:32.484 13:06:13 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:32.743 [2024-12-13 13:06:13.357197] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:32.743 13:06:13 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:32.743 13:06:13 -- host/failover.sh@31 -- # bdevperf_pid=95221 00:21:32.743 13:06:13 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.743 13:06:13 -- host/failover.sh@34 -- # waitforlisten 95221 /var/tmp/bdevperf.sock 00:21:32.743 13:06:13 -- common/autotest_common.sh@829 -- # '[' -z 95221 ']' 00:21:32.743 13:06:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.743 13:06:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
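Stripped of the xtrace noise, the target-side configuration issued above is a five-step rpc.py sequence: create the TCP transport, back it with a 64 MiB / 512 B malloc bdev, create the subsystem, attach the namespace, and expose the three listeners that the failover steps below add and remove. A condensed sketch of the same calls (paths, NQN, and sizes are copied from the log; the rpc variable is just shorthand introduced here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

bdevperf is then launched with -z -r /var/tmp/bdevperf.sock so that the NVMe0 controller can be attached to it over its own RPC socket (ports 4420 and 4421 as the initial paths, as the next steps show) before the 15-second verify workload is kicked off via bdevperf.py perform_tests.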
00:21:32.743 13:06:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.743 13:06:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.743 13:06:13 -- common/autotest_common.sh@10 -- # set +x 00:21:34.121 13:06:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.121 13:06:14 -- common/autotest_common.sh@862 -- # return 0 00:21:34.121 13:06:14 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.121 NVMe0n1 00:21:34.121 13:06:14 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.380 00:21:34.380 13:06:15 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.380 13:06:15 -- host/failover.sh@39 -- # run_test_pid=95267 00:21:34.380 13:06:15 -- host/failover.sh@41 -- # sleep 1 00:21:35.755 13:06:16 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.755 [2024-12-13 13:06:16.393198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393342] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393358] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.755 [2024-12-13 13:06:16.393514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the
state(5) to be set 00:21:35.756 [2024-12-13 13:06:16.393521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.756 [2024-12-13 13:06:16.393528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.756 [2024-12-13 13:06:16.393536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.756 [2024-12-13 13:06:16.393543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.756 [2024-12-13 13:06:16.393551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bab0 is same with the state(5) to be set 00:21:35.756 13:06:16 -- host/failover.sh@45 -- # sleep 3 00:21:39.041 13:06:19 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.041 00:21:39.041 13:06:19 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:39.300 [2024-12-13 13:06:19.969369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c920 is same with the state(5) to be set 00:21:39.300 [2024-12-13 13:06:19.969464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c920 is same with the state(5) to be set 00:21:39.300 [2024-12-13 13:06:19.969491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c920 is same with the state(5) to be set 00:21:39.300 [2024-12-13 13:06:19.969500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c920 is same with the state(5) to be set 00:21:39.300 [2024-12-13 13:06:19.969510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c920 is same with the state(5) to be set 00:21:39.300 [2024-12-13 13:06:19.969519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c920 is same with the state(5) to be set 00:21:39.300 [2024-12-13 13:06:19.969528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c920 is same with the state(5) to be set 00:21:39.300 13:06:19 -- host/failover.sh@50 -- # sleep 3 00:21:42.585 13:06:22 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.585 [2024-12-13 13:06:23.252592] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.585 13:06:23 -- host/failover.sh@55 -- # sleep 1 00:21:43.520 13:06:24 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:43.778 [2024-12-13 13:06:24.479794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.778 [2024-12-13 13:06:24.479882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.778 [2024-12-13 13:06:24.479894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be 
set 00:21:43.778 [2024-12-13 13:06:24.480543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the
state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 [2024-12-13 13:06:24.480616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e040 is same with the state(5) to be set 00:21:43.779 13:06:24 -- host/failover.sh@59 -- # wait 95267 00:21:50.349 0 00:21:50.349 13:06:30 -- host/failover.sh@61 -- # killprocess 95221 00:21:50.349 13:06:30 -- common/autotest_common.sh@936 -- # '[' -z 95221 ']' 00:21:50.349 13:06:30 -- common/autotest_common.sh@940 -- # kill -0 95221 00:21:50.349 13:06:30 -- common/autotest_common.sh@941 -- # uname 00:21:50.349 13:06:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:50.349 13:06:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95221 00:21:50.349 killing process with pid 95221 00:21:50.349 13:06:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:50.349 13:06:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:50.349 13:06:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95221' 00:21:50.349 13:06:30 -- common/autotest_common.sh@955 -- # kill 95221 00:21:50.349 13:06:30 -- common/autotest_common.sh@960 -- # wait 95221 00:21:50.349 13:06:30 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:50.349 [2024-12-13 13:06:13.419651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:50.349 [2024-12-13 13:06:13.419758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95221 ] 00:21:50.349 [2024-12-13 13:06:13.555363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.349 [2024-12-13 13:06:13.627053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.349 Running I/O for 15 seconds... 
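The rest of the dumped try.txt is bdevperf's per-command trace for the 15-second verify run. Each time the listener for the path currently carrying I/O is removed in the failover steps above, the connection on that path is dropped and commands still in flight on its I/O queue complete with ABORTED - SQ DELETION (the status visible throughout the trace below), so that bdev_nvme can retry them on the remaining path; that is the behavior this test exercises. A quick sanity check on the dump, as a hypothetical one-liner that is not part of failover.sh:

grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt    # count commands aborted across the failovers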
00:21:50.349 [2024-12-13 13:06:16.393831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.393904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.393932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.393948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.393965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.393980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.393995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.349 [2024-12-13 13:06:16.394340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.349 [2024-12-13 13:06:16.394352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.394973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.394988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 
[2024-12-13 13:06:16.395327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.350 [2024-12-13 13:06:16.395357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.350 [2024-12-13 13:06:16.395460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.350 [2024-12-13 13:06:16.395524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.350 [2024-12-13 13:06:16.395593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.350 [2024-12-13 13:06:16.395665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.350 [2024-12-13 13:06:16.395692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.350 [2024-12-13 13:06:16.395719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.350 [2024-12-13 13:06:16.395746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.350 [2024-12-13 13:06:16.395787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.395800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.395815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.395840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.395857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.395870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.395885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.395898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.395913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.395927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.395941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.395955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.395970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.395983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.395998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:50.351 [2024-12-13 13:06:16.396372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.396432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.396459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.396493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.396547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.396574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396679] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.396911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.396981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.396994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.397009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.351 [2024-12-13 13:06:16.397038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.397064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.397078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.397093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.351 [2024-12-13 13:06:16.397107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.351 [2024-12-13 13:06:16.397122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 13:06:16.397642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.352 [2024-12-13 
13:06:16.397670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.397983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.397998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.398011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.398061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.398089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.398117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.398173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.398208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.352 [2024-12-13 13:06:16.398235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bd40 is same with the state(5) to be set 00:21:50.352 [2024-12-13 13:06:16.398265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.352 [2024-12-13 13:06:16.398281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.352 [2024-12-13 13:06:16.398298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3432 len:8 PRP1 0x0 PRP2 0x0 00:21:50.352 [2024-12-13 13:06:16.398311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398366] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x181bd40 was disconnected and freed. reset controller. 
00:21:50.352 [2024-12-13 13:06:16.398383] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:50.352 [2024-12-13 13:06:16.398466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.352 [2024-12-13 13:06:16.398486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.352 [2024-12-13 13:06:16.398514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.352 [2024-12-13 13:06:16.398540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.352 [2024-12-13 13:06:16.398565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.352 [2024-12-13 13:06:16.398578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.352 [2024-12-13 13:06:16.398643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9940 (9): Bad file descriptor 00:21:50.352 [2024-12-13 13:06:16.401191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.353 [2024-12-13 13:06:16.428907] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
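The abort storm above resolves in the last few records: every I/O still queued on qpair 0x181bd40 is completed manually as ABORTED - SQ DELETION, the qpair is disconnected and freed, and bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the controller reset succeeds. A failover target only exists because a second transport ID was registered for the same subsystem; a minimal rpc.py sketch of that setup follows. The controller name Nvme0 and the exact -x failover option spelling are assumptions for illustration and may vary by SPDK release — only the NQN and the two TCP listener addresses appear in the log itself.
# primary path (assumed controller name Nvme0; NQN and addresses taken from the log)
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# alternate path for the same NQN; with failover-style multipath the second
# transport ID is kept as a standby that bdev_nvme_failover_trid can switch to
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover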
00:21:50.353 [2024-12-13 13:06:19.969915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.969969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.353 [2024-12-13 13:06:19.970112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.353 [2024-12-13 13:06:19.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970353] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.353 [2024-12-13 13:06:19.970765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.353 [2024-12-13 13:06:19.970816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.353 [2024-12-13 13:06:19.970901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.353 [2024-12-13 13:06:19.970929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.970982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.970995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.971011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.971024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.971048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:66 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.971062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.971077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.353 [2024-12-13 13:06:19.971091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.971134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.353 [2024-12-13 13:06:19.971150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.353 [2024-12-13 13:06:19.971165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.971610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 
[2024-12-13 13:06:19.971783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.971976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.971988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.972256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.972289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.972315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.972391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.972441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.354 [2024-12-13 13:06:19.972467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.354 [2024-12-13 13:06:19.972492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.354 [2024-12-13 13:06:19.972505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.972517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.972542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.972568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.972624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.972669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.972707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.972733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.972776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.972813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.972852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.972891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.972918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.972945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.972972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.972987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 
[2024-12-13 13:06:19.973159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.973905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.355 [2024-12-13 13:06:19.973972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.355 [2024-12-13 13:06:19.973986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.355 [2024-12-13 13:06:19.974005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.356 [2024-12-13 13:06:19.974117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.356 [2024-12-13 13:06:19.974184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84440 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:19.974397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5d90 is same with the state(5) to be set 00:21:50.356 [2024-12-13 13:06:19.974447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.356 [2024-12-13 13:06:19.974463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.356 [2024-12-13 13:06:19.974474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84504 len:8 PRP1 0x0 PRP2 0x0 00:21:50.356 [2024-12-13 13:06:19.974487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974540] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17f5d90 was disconnected and freed. reset controller. 
00:21:50.356 [2024-12-13 13:06:19.974557] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:50.356 [2024-12-13 13:06:19.974632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.356 [2024-12-13 13:06:19.974654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.356 [2024-12-13 13:06:19.974681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.356 [2024-12-13 13:06:19.974706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.356 [2024-12-13 13:06:19.974731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:19.974743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.356 [2024-12-13 13:06:19.974819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9940 (9): Bad file descriptor 00:21:50.356 [2024-12-13 13:06:19.977407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.356 [2024-12-13 13:06:20.008411] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:50.356 [2024-12-13 13:06:24.480740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.480827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.480856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.480873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.480890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.480903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.480919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.480932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.480948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.480962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.480995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.356 [2024-12-13 13:06:24.481441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.356 [2024-12-13 13:06:24.481453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.481980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.481994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32664 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:50.357 [2024-12-13 13:06:24.482428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.357 [2024-12-13 13:06:24.482610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.357 [2024-12-13 13:06:24.482674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.357 [2024-12-13 13:06:24.482686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.482967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.482982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.482996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483034] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:50.358 [2024-12-13 13:06:24.483747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.358 [2024-12-13 13:06:24.483887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.358 [2024-12-13 13:06:24.483932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.358 [2024-12-13 13:06:24.483945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.483960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.483973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484143] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484458] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.359 [2024-12-13 13:06:24.484655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32512 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.359 [2024-12-13 13:06:24.484954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.484969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181cd50 is same with the state(5) to be set 00:21:50.359 [2024-12-13 13:06:24.484984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.359 [2024-12-13 13:06:24.484994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.359 [2024-12-13 13:06:24.485005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32640 len:8 PRP1 0x0 PRP2 0x0 00:21:50.359 [2024-12-13 13:06:24.485028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.359 [2024-12-13 13:06:24.485082] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x181cd50 was disconnected and freed. reset controller. 
00:21:50.359 [2024-12-13 13:06:24.485100] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:50.359 [2024-12-13 13:06:24.485152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:50.359 [2024-12-13 13:06:24.485173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:50.359 [2024-12-13 13:06:24.485198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:50.359 [2024-12-13 13:06:24.485212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:50.359 [2024-12-13 13:06:24.485241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:50.359 [2024-12-13 13:06:24.485254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:50.359 [2024-12-13 13:06:24.485267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:50.359 [2024-12-13 13:06:24.485279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:50.359 [2024-12-13 13:06:24.485292] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.359 [2024-12-13 13:06:24.485323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9940 (9): Bad file descriptor
00:21:50.359 [2024-12-13 13:06:24.487911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.359 [2024-12-13 13:06:24.521963] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:50.359
00:21:50.359 Latency(us)
00:21:50.359 [2024-12-13T13:06:31.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.359 [2024-12-13T13:06:31.135Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:50.359 Verification LBA range: start 0x0 length 0x4000
00:21:50.359 NVMe0n1 : 15.01 13156.81 51.39 318.04 0.00 9483.09 655.36 15966.95
00:21:50.359 [2024-12-13T13:06:31.136Z] ===================================================================================================================
00:21:50.360 [2024-12-13T13:06:31.136Z] Total : 13156.81 51.39 318.04 0.00 9483.09 655.36 15966.95
00:21:50.360 Received shutdown signal, test time was about 15.000000 seconds
00:21:50.360
00:21:50.360 Latency(us)
00:21:50.360 [2024-12-13T13:06:31.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.360 [2024-12-13T13:06:31.136Z] ===================================================================================================================
00:21:50.360 [2024-12-13T13:06:31.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:50.360 13:06:30 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:50.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:50.360 13:06:30 -- host/failover.sh@65 -- # count=3 00:21:50.360 13:06:30 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:50.360 13:06:30 -- host/failover.sh@73 -- # bdevperf_pid=95467 00:21:50.360 13:06:30 -- host/failover.sh@75 -- # waitforlisten 95467 /var/tmp/bdevperf.sock 00:21:50.360 13:06:30 -- common/autotest_common.sh@829 -- # '[' -z 95467 ']' 00:21:50.360 13:06:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.360 13:06:30 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:50.360 13:06:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.360 13:06:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.360 13:06:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.360 13:06:30 -- common/autotest_common.sh@10 -- # set +x 00:21:50.928 13:06:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.928 13:06:31 -- common/autotest_common.sh@862 -- # return 0 00:21:50.928 13:06:31 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:51.188 [2024-12-13 13:06:31.792680] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:51.188 13:06:31 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:51.447 [2024-12-13 13:06:32.024800] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:51.447 13:06:32 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.706 NVMe0n1 00:21:51.706 13:06:32 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.965 00:21:51.965 13:06:32 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.224 00:21:52.224 13:06:32 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:52.224 13:06:32 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:52.483 13:06:33 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.742 13:06:33 -- host/failover.sh@87 -- # sleep 3 00:21:56.035 13:06:36 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:56.035 13:06:36 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.035 13:06:36 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.035 13:06:36 -- host/failover.sh@90 -- # run_test_pid=95608 00:21:56.035 13:06:36 -- host/failover.sh@92 -- # wait 95608 00:21:57.450 0 00:21:57.450 13:06:37 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:57.450 [2024-12-13 13:06:30.561631] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:57.450 [2024-12-13 13:06:30.561784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95467 ] 00:21:57.450 [2024-12-13 13:06:30.703508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.450 [2024-12-13 13:06:30.778706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.450 [2024-12-13 13:06:33.422277] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:57.450 [2024-12-13 13:06:33.422390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.450 [2024-12-13 13:06:33.422414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.450 [2024-12-13 13:06:33.422430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.450 [2024-12-13 13:06:33.422443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.450 [2024-12-13 13:06:33.422456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.450 [2024-12-13 13:06:33.422468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.450 [2024-12-13 13:06:33.422481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.450 [2024-12-13 13:06:33.422493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.450 [2024-12-13 13:06:33.422505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:57.450 [2024-12-13 13:06:33.422551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:57.450 [2024-12-13 13:06:33.422581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83b940 (9): Bad file descriptor 00:21:57.450 [2024-12-13 13:06:33.434385] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:57.450 Running I/O for 1 seconds... 
00:21:57.450 00:21:57.450 Latency(us) 00:21:57.450 [2024-12-13T13:06:38.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.450 [2024-12-13T13:06:38.226Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:57.450 Verification LBA range: start 0x0 length 0x4000 00:21:57.450 NVMe0n1 : 1.01 14228.42 55.58 0.00 0.00 8953.15 1169.22 13881.72 00:21:57.450 [2024-12-13T13:06:38.226Z] =================================================================================================================== 00:21:57.450 [2024-12-13T13:06:38.226Z] Total : 14228.42 55.58 0.00 0.00 8953.15 1169.22 13881.72 00:21:57.450 13:06:37 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:57.450 13:06:37 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:57.450 13:06:38 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.708 13:06:38 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:57.708 13:06:38 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:57.967 13:06:38 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:58.226 13:06:38 -- host/failover.sh@101 -- # sleep 3 00:22:01.518 13:06:41 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.518 13:06:41 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:01.518 13:06:42 -- host/failover.sh@108 -- # killprocess 95467 00:22:01.518 13:06:42 -- common/autotest_common.sh@936 -- # '[' -z 95467 ']' 00:22:01.518 13:06:42 -- common/autotest_common.sh@940 -- # kill -0 95467 00:22:01.518 13:06:42 -- common/autotest_common.sh@941 -- # uname 00:22:01.518 13:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:01.518 13:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95467 00:22:01.518 killing process with pid 95467 00:22:01.518 13:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:01.518 13:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:01.518 13:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95467' 00:22:01.518 13:06:42 -- common/autotest_common.sh@955 -- # kill 95467 00:22:01.518 13:06:42 -- common/autotest_common.sh@960 -- # wait 95467 00:22:01.777 13:06:42 -- host/failover.sh@110 -- # sync 00:22:01.777 13:06:42 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:02.036 13:06:42 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:02.036 13:06:42 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:02.036 13:06:42 -- host/failover.sh@116 -- # nvmftestfini 00:22:02.036 13:06:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:02.036 13:06:42 -- nvmf/common.sh@116 -- # sync 00:22:02.036 13:06:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:02.036 13:06:42 -- nvmf/common.sh@119 -- # set +e 00:22:02.036 13:06:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:02.036 13:06:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:02.036 rmmod nvme_tcp 
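The failover pass traced above reduces to a short RPC sequence driven against the bdevperf application (started as build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f, as shown in the trace). A condensed sketch of that sequence, using only the rpc.py calls, socket paths and NQN that appear in the trace; the RPC shell variable is just shorthand for the sketch:

  # listener changes go to the target's default RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # controller/path changes go to bdevperf's RPC socket
  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the active path, wait, and make sure the controller survives on a remaining path
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $RPC bdev_nvme_get_controllers | grep -q NVMe0
  # run the verify workload once the paths are set up
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # then retire the extra paths one by one and re-check the controller
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $RPC bdev_nvme_get_controllers | grep -q NVMe0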
00:22:02.036 rmmod nvme_fabrics 00:22:02.036 rmmod nvme_keyring 00:22:02.036 13:06:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:02.036 13:06:42 -- nvmf/common.sh@123 -- # set -e 00:22:02.036 13:06:42 -- nvmf/common.sh@124 -- # return 0 00:22:02.036 13:06:42 -- nvmf/common.sh@477 -- # '[' -n 95109 ']' 00:22:02.036 13:06:42 -- nvmf/common.sh@478 -- # killprocess 95109 00:22:02.036 13:06:42 -- common/autotest_common.sh@936 -- # '[' -z 95109 ']' 00:22:02.036 13:06:42 -- common/autotest_common.sh@940 -- # kill -0 95109 00:22:02.036 13:06:42 -- common/autotest_common.sh@941 -- # uname 00:22:02.036 13:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:02.036 13:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95109 00:22:02.036 killing process with pid 95109 00:22:02.036 13:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:02.036 13:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:02.036 13:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95109' 00:22:02.036 13:06:42 -- common/autotest_common.sh@955 -- # kill 95109 00:22:02.036 13:06:42 -- common/autotest_common.sh@960 -- # wait 95109 00:22:02.603 13:06:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:02.603 13:06:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:02.603 13:06:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:02.603 13:06:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.603 13:06:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:02.603 13:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.603 13:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.603 13:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.603 13:06:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:02.603 00:22:02.603 real 0m33.161s 00:22:02.603 user 2m8.194s 00:22:02.603 sys 0m5.063s 00:22:02.603 13:06:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:02.603 13:06:43 -- common/autotest_common.sh@10 -- # set +x 00:22:02.603 ************************************ 00:22:02.603 END TEST nvmf_failover 00:22:02.603 ************************************ 00:22:02.603 13:06:43 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:02.603 13:06:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:02.603 13:06:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:02.603 13:06:43 -- common/autotest_common.sh@10 -- # set +x 00:22:02.603 ************************************ 00:22:02.603 START TEST nvmf_discovery 00:22:02.603 ************************************ 00:22:02.603 13:06:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:02.603 * Looking for test storage... 
00:22:02.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:02.603 13:06:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:02.603 13:06:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:02.603 13:06:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:02.603 13:06:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:02.603 13:06:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:02.603 13:06:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:02.603 13:06:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:02.603 13:06:43 -- scripts/common.sh@335 -- # IFS=.-: 00:22:02.603 13:06:43 -- scripts/common.sh@335 -- # read -ra ver1 00:22:02.603 13:06:43 -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.603 13:06:43 -- scripts/common.sh@336 -- # read -ra ver2 00:22:02.603 13:06:43 -- scripts/common.sh@337 -- # local 'op=<' 00:22:02.603 13:06:43 -- scripts/common.sh@339 -- # ver1_l=2 00:22:02.603 13:06:43 -- scripts/common.sh@340 -- # ver2_l=1 00:22:02.603 13:06:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:02.603 13:06:43 -- scripts/common.sh@343 -- # case "$op" in 00:22:02.603 13:06:43 -- scripts/common.sh@344 -- # : 1 00:22:02.603 13:06:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:02.604 13:06:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.604 13:06:43 -- scripts/common.sh@364 -- # decimal 1 00:22:02.604 13:06:43 -- scripts/common.sh@352 -- # local d=1 00:22:02.604 13:06:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.604 13:06:43 -- scripts/common.sh@354 -- # echo 1 00:22:02.604 13:06:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:02.604 13:06:43 -- scripts/common.sh@365 -- # decimal 2 00:22:02.604 13:06:43 -- scripts/common.sh@352 -- # local d=2 00:22:02.604 13:06:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.604 13:06:43 -- scripts/common.sh@354 -- # echo 2 00:22:02.604 13:06:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:02.604 13:06:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:02.604 13:06:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:02.604 13:06:43 -- scripts/common.sh@367 -- # return 0 00:22:02.604 13:06:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.604 13:06:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.604 --rc genhtml_branch_coverage=1 00:22:02.604 --rc genhtml_function_coverage=1 00:22:02.604 --rc genhtml_legend=1 00:22:02.604 --rc geninfo_all_blocks=1 00:22:02.604 --rc geninfo_unexecuted_blocks=1 00:22:02.604 00:22:02.604 ' 00:22:02.604 13:06:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.604 --rc genhtml_branch_coverage=1 00:22:02.604 --rc genhtml_function_coverage=1 00:22:02.604 --rc genhtml_legend=1 00:22:02.604 --rc geninfo_all_blocks=1 00:22:02.604 --rc geninfo_unexecuted_blocks=1 00:22:02.604 00:22:02.604 ' 00:22:02.604 13:06:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.604 --rc genhtml_branch_coverage=1 00:22:02.604 --rc genhtml_function_coverage=1 00:22:02.604 --rc genhtml_legend=1 00:22:02.604 --rc geninfo_all_blocks=1 00:22:02.604 --rc geninfo_unexecuted_blocks=1 00:22:02.604 00:22:02.604 ' 00:22:02.604 
13:06:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.604 --rc genhtml_branch_coverage=1 00:22:02.604 --rc genhtml_function_coverage=1 00:22:02.604 --rc genhtml_legend=1 00:22:02.604 --rc geninfo_all_blocks=1 00:22:02.604 --rc geninfo_unexecuted_blocks=1 00:22:02.604 00:22:02.604 ' 00:22:02.604 13:06:43 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:02.604 13:06:43 -- nvmf/common.sh@7 -- # uname -s 00:22:02.604 13:06:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.604 13:06:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.604 13:06:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.604 13:06:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.604 13:06:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.604 13:06:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.604 13:06:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.604 13:06:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.604 13:06:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.604 13:06:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.604 13:06:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:22:02.604 13:06:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:22:02.604 13:06:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.604 13:06:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.604 13:06:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:02.604 13:06:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.604 13:06:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.604 13:06:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.604 13:06:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.604 13:06:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.604 13:06:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.604 13:06:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.604 13:06:43 -- paths/export.sh@5 -- # export PATH 00:22:02.604 13:06:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.604 13:06:43 -- nvmf/common.sh@46 -- # : 0 00:22:02.604 13:06:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:02.604 13:06:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:02.604 13:06:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:02.604 13:06:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.604 13:06:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.604 13:06:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:02.604 13:06:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:02.604 13:06:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:02.604 13:06:43 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:02.604 13:06:43 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:02.604 13:06:43 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:02.604 13:06:43 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:02.604 13:06:43 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:02.604 13:06:43 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:02.604 13:06:43 -- host/discovery.sh@25 -- # nvmftestinit 00:22:02.604 13:06:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:02.604 13:06:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.604 13:06:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:02.604 13:06:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:02.604 13:06:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:02.604 13:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.604 13:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.604 13:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.604 13:06:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:02.604 13:06:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:02.604 13:06:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:02.604 13:06:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:02.604 13:06:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:02.604 13:06:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:02.604 13:06:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.604 13:06:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.604 13:06:43 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:02.604 13:06:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:02.604 13:06:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:02.604 13:06:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:02.604 13:06:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:02.604 13:06:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.604 13:06:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:02.604 13:06:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:02.604 13:06:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:02.604 13:06:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:02.604 13:06:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:02.863 13:06:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:02.863 Cannot find device "nvmf_tgt_br" 00:22:02.863 13:06:43 -- nvmf/common.sh@154 -- # true 00:22:02.863 13:06:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:02.863 Cannot find device "nvmf_tgt_br2" 00:22:02.863 13:06:43 -- nvmf/common.sh@155 -- # true 00:22:02.863 13:06:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:02.863 13:06:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:02.863 Cannot find device "nvmf_tgt_br" 00:22:02.863 13:06:43 -- nvmf/common.sh@157 -- # true 00:22:02.863 13:06:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:02.863 Cannot find device "nvmf_tgt_br2" 00:22:02.863 13:06:43 -- nvmf/common.sh@158 -- # true 00:22:02.863 13:06:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:02.863 13:06:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:02.863 13:06:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:02.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.863 13:06:43 -- nvmf/common.sh@161 -- # true 00:22:02.863 13:06:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:02.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.863 13:06:43 -- nvmf/common.sh@162 -- # true 00:22:02.863 13:06:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:02.863 13:06:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:02.863 13:06:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:02.863 13:06:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:02.863 13:06:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:02.863 13:06:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:02.863 13:06:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:02.863 13:06:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:03.121 13:06:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:03.121 13:06:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:03.121 13:06:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:03.121 13:06:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:03.121 13:06:43 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:03.121 13:06:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:03.121 13:06:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:03.121 13:06:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:03.121 13:06:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:03.121 13:06:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:03.121 13:06:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:03.121 13:06:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:03.121 13:06:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:03.121 13:06:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:03.121 13:06:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:03.121 13:06:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:03.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:22:03.121 00:22:03.121 --- 10.0.0.2 ping statistics --- 00:22:03.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.121 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:03.121 13:06:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:03.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:03.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:22:03.121 00:22:03.121 --- 10.0.0.3 ping statistics --- 00:22:03.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.121 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:03.121 13:06:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:03.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:03.121 00:22:03.121 --- 10.0.0.1 ping statistics --- 00:22:03.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.122 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:03.122 13:06:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.122 13:06:43 -- nvmf/common.sh@421 -- # return 0 00:22:03.122 13:06:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:03.122 13:06:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.122 13:06:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:03.122 13:06:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:03.122 13:06:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.122 13:06:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:03.122 13:06:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:03.122 13:06:43 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:03.122 13:06:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:03.122 13:06:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.122 13:06:43 -- common/autotest_common.sh@10 -- # set +x 00:22:03.122 13:06:43 -- nvmf/common.sh@469 -- # nvmfpid=95917 00:22:03.122 13:06:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.122 13:06:43 -- nvmf/common.sh@470 -- # waitforlisten 95917 00:22:03.122 13:06:43 -- common/autotest_common.sh@829 -- # '[' -z 95917 ']' 00:22:03.122 13:06:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.122 13:06:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.122 13:06:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.122 13:06:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.122 13:06:43 -- common/autotest_common.sh@10 -- # set +x 00:22:03.122 [2024-12-13 13:06:43.840535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:03.122 [2024-12-13 13:06:43.840611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.380 [2024-12-13 13:06:43.970933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.380 [2024-12-13 13:06:44.055954] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.380 [2024-12-13 13:06:44.056147] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.380 [2024-12-13 13:06:44.056162] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.380 [2024-12-13 13:06:44.056171] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
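The pings above succeed because the trace just before them assembled the test fabric from a network namespace, veth pairs and a bridge. A condensed sketch of that setup, using only the ip/iptables commands shown in the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator side -> target namespace, as verified above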
00:22:03.380 [2024-12-13 13:06:44.056203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.316 13:06:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.316 13:06:44 -- common/autotest_common.sh@862 -- # return 0 00:22:04.316 13:06:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:04.316 13:06:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.316 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 13:06:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.316 13:06:44 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.316 13:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.316 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 [2024-12-13 13:06:44.916782] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.316 13:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.316 13:06:44 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:04.316 13:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.316 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 [2024-12-13 13:06:44.925056] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:04.316 13:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.316 13:06:44 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:04.316 13:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.316 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 null0 00:22:04.316 13:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.316 13:06:44 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:04.316 13:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.316 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 null1 00:22:04.316 13:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.316 13:06:44 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:04.316 13:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.316 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:04.316 13:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.316 13:06:44 -- host/discovery.sh@45 -- # hostpid=95973 00:22:04.316 13:06:44 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:04.316 13:06:44 -- host/discovery.sh@46 -- # waitforlisten 95973 /tmp/host.sock 00:22:04.316 13:06:44 -- common/autotest_common.sh@829 -- # '[' -z 95973 ']' 00:22:04.316 13:06:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:04.316 13:06:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.316 13:06:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:04.316 13:06:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.316 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 [2024-12-13 13:06:45.010248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:04.316 [2024-12-13 13:06:45.010537] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95973 ] 00:22:04.575 [2024-12-13 13:06:45.146075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.575 [2024-12-13 13:06:45.219063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:04.575 [2024-12-13 13:06:45.219568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.509 13:06:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.509 13:06:46 -- common/autotest_common.sh@862 -- # return 0 00:22:05.509 13:06:46 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.509 13:06:46 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:05.509 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.509 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.509 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.509 13:06:46 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:05.509 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.509 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.509 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.509 13:06:46 -- host/discovery.sh@72 -- # notify_id=0 00:22:05.509 13:06:46 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:05.509 13:06:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.510 13:06:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.510 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.510 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.510 13:06:46 -- host/discovery.sh@59 -- # sort 00:22:05.510 13:06:46 -- host/discovery.sh@59 -- # xargs 00:22:05.510 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.510 13:06:46 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:05.510 13:06:46 -- host/discovery.sh@79 -- # get_bdev_list 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.510 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # sort 00:22:05.510 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # xargs 00:22:05.510 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.510 13:06:46 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:05.510 13:06:46 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:05.510 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.510 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.510 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.510 13:06:46 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:05.510 13:06:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.510 13:06:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.510 13:06:46 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.510 13:06:46 -- host/discovery.sh@59 -- # sort 00:22:05.510 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.510 13:06:46 -- host/discovery.sh@59 -- # xargs 00:22:05.510 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.510 13:06:46 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:05.510 13:06:46 -- host/discovery.sh@83 -- # get_bdev_list 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.510 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # sort 00:22:05.510 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.510 13:06:46 -- host/discovery.sh@55 -- # xargs 00:22:05.510 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.769 13:06:46 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:05.769 13:06:46 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:05.769 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.769 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.769 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.769 13:06:46 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.769 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # sort 00:22:05.769 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # xargs 00:22:05.769 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.769 13:06:46 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:05.769 13:06:46 -- host/discovery.sh@87 -- # get_bdev_list 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # sort 00:22:05.769 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.769 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # xargs 00:22:05.769 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.769 13:06:46 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:05.769 13:06:46 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:05.769 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.769 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.769 [2024-12-13 13:06:46.413415] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.769 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.769 13:06:46 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # sort 00:22:05.769 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.769 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.769 13:06:46 -- host/discovery.sh@59 -- # xargs 
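The empty get_subsystem_names/get_bdev_list results above are the baseline for the discovery test. Condensed to the rpc_cmd wrapper calls that appear in this trace (some just below this point; calls with -s /tmp/host.sock go to the host-side application, the rest to the target), the flow being exercised is:

  # host side: follow the discovery service on the target's discovery port
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side: create a subsystem, back it with a null bdev, expose it on 4420 and allow the host NQN
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # host side: discovery should then attach the subsystem as nvme0 with bdev nvme0n1
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'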
00:22:05.769 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.769 13:06:46 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:05.769 13:06:46 -- host/discovery.sh@93 -- # get_bdev_list 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.769 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # sort 00:22:05.769 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.769 13:06:46 -- host/discovery.sh@55 -- # xargs 00:22:05.769 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.769 13:06:46 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:05.769 13:06:46 -- host/discovery.sh@94 -- # get_notification_count 00:22:05.769 13:06:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:05.769 13:06:46 -- host/discovery.sh@74 -- # jq '. | length' 00:22:05.769 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.769 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:05.769 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.028 13:06:46 -- host/discovery.sh@74 -- # notification_count=0 00:22:06.028 13:06:46 -- host/discovery.sh@75 -- # notify_id=0 00:22:06.028 13:06:46 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:06.028 13:06:46 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:06.028 13:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.028 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:06.028 13:06:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.028 13:06:46 -- host/discovery.sh@100 -- # sleep 1 00:22:06.286 [2024-12-13 13:06:47.060823] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:06.286 [2024-12-13 13:06:47.060870] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:06.286 [2024-12-13 13:06:47.060888] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:06.545 [2024-12-13 13:06:47.146949] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:06.545 [2024-12-13 13:06:47.202476] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:06.545 [2024-12-13 13:06:47.202540] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:07.113 13:06:47 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:07.113 13:06:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:07.113 13:06:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:07.113 13:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.113 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:07.113 13:06:47 -- host/discovery.sh@59 -- # sort 00:22:07.113 13:06:47 -- host/discovery.sh@59 -- # xargs 00:22:07.113 13:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@102 -- # get_bdev_list 00:22:07.113 13:06:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:07.113 13:06:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:07.113 13:06:47 -- host/discovery.sh@55 -- # sort 00:22:07.113 13:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.113 13:06:47 -- host/discovery.sh@55 -- # xargs 00:22:07.113 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:07.113 13:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:07.113 13:06:47 -- host/discovery.sh@63 -- # sort -n 00:22:07.113 13:06:47 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:07.113 13:06:47 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:07.113 13:06:47 -- host/discovery.sh@63 -- # xargs 00:22:07.113 13:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.113 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:07.113 13:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@104 -- # get_notification_count 00:22:07.113 13:06:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:07.113 13:06:47 -- host/discovery.sh@74 -- # jq '. | length' 00:22:07.113 13:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.113 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:07.113 13:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@74 -- # notification_count=1 00:22:07.113 13:06:47 -- host/discovery.sh@75 -- # notify_id=1 00:22:07.113 13:06:47 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:07.113 13:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.113 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:07.113 13:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.113 13:06:47 -- host/discovery.sh@109 -- # sleep 1 00:22:08.049 13:06:48 -- host/discovery.sh@110 -- # get_bdev_list 00:22:08.049 13:06:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.049 13:06:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.049 13:06:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.049 13:06:48 -- common/autotest_common.sh@10 -- # set +x 00:22:08.049 13:06:48 -- host/discovery.sh@55 -- # sort 00:22:08.049 13:06:48 -- host/discovery.sh@55 -- # xargs 00:22:08.308 13:06:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.308 13:06:48 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.308 13:06:48 -- host/discovery.sh@111 -- # get_notification_count 00:22:08.308 13:06:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:08.308 13:06:48 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:08.308 13:06:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.308 13:06:48 -- common/autotest_common.sh@10 -- # set +x 00:22:08.308 13:06:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.308 13:06:48 -- host/discovery.sh@74 -- # notification_count=1 00:22:08.308 13:06:48 -- host/discovery.sh@75 -- # notify_id=2 00:22:08.308 13:06:48 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:08.308 13:06:48 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:08.308 13:06:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.308 13:06:48 -- common/autotest_common.sh@10 -- # set +x 00:22:08.308 [2024-12-13 13:06:48.926635] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:08.308 [2024-12-13 13:06:48.927015] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:08.308 [2024-12-13 13:06:48.927043] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:08.308 13:06:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.308 13:06:48 -- host/discovery.sh@117 -- # sleep 1 00:22:08.308 [2024-12-13 13:06:49.013042] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:08.308 [2024-12-13 13:06:49.073355] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:08.308 [2024-12-13 13:06:49.073377] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:08.308 [2024-12-13 13:06:49.073399] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:09.244 13:06:49 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:09.244 13:06:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.244 13:06:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.244 13:06:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.244 13:06:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.244 13:06:49 -- host/discovery.sh@59 -- # sort 00:22:09.244 13:06:49 -- host/discovery.sh@59 -- # xargs 00:22:09.244 13:06:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.244 13:06:49 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.244 13:06:49 -- host/discovery.sh@119 -- # get_bdev_list 00:22:09.244 13:06:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.244 13:06:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.244 13:06:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.244 13:06:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.244 13:06:49 -- host/discovery.sh@55 -- # sort 00:22:09.244 13:06:49 -- host/discovery.sh@55 -- # xargs 00:22:09.503 13:06:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.503 13:06:50 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:09.503 13:06:50 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:09.503 13:06:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:09.503 13:06:50 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:09.503 13:06:50 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.503 13:06:50 -- host/discovery.sh@63 -- # sort -n 00:22:09.503 13:06:50 -- common/autotest_common.sh@10 -- # set +x 00:22:09.503 13:06:50 -- host/discovery.sh@63 -- # xargs 00:22:09.503 13:06:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.503 13:06:50 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:09.503 13:06:50 -- host/discovery.sh@121 -- # get_notification_count 00:22:09.503 13:06:50 -- host/discovery.sh@74 -- # jq '. | length' 00:22:09.503 13:06:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:09.503 13:06:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.503 13:06:50 -- common/autotest_common.sh@10 -- # set +x 00:22:09.503 13:06:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.503 13:06:50 -- host/discovery.sh@74 -- # notification_count=0 00:22:09.503 13:06:50 -- host/discovery.sh@75 -- # notify_id=2 00:22:09.503 13:06:50 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:09.503 13:06:50 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:09.503 13:06:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.503 13:06:50 -- common/autotest_common.sh@10 -- # set +x 00:22:09.503 [2024-12-13 13:06:50.155498] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:09.503 [2024-12-13 13:06:50.155547] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:09.503 13:06:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.503 13:06:50 -- host/discovery.sh@127 -- # sleep 1 00:22:09.503 [2024-12-13 13:06:50.160494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.503 [2024-12-13 13:06:50.160526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.503 [2024-12-13 13:06:50.160571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.503 [2024-12-13 13:06:50.160580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.503 [2024-12-13 13:06:50.160606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.503 [2024-12-13 13:06:50.160615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.503 [2024-12-13 13:06:50.160624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.503 [2024-12-13 13:06:50.160633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.503 [2024-12-13 13:06:50.160642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.503 [2024-12-13 13:06:50.170433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.503 [2024-12-13 13:06:50.180449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.503 [2024-12-13 13:06:50.180591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.180667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.180683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905cf0 with addr=10.0.0.2, port=4420 00:22:09.503 [2024-12-13 13:06:50.180693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.503 [2024-12-13 13:06:50.180709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.503 [2024-12-13 13:06:50.180735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.503 [2024-12-13 13:06:50.180745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.503 [2024-12-13 13:06:50.180755] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.503 [2024-12-13 13:06:50.180769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:09.503 [2024-12-13 13:06:50.190529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.503 [2024-12-13 13:06:50.190632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.190672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.190686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905cf0 with addr=10.0.0.2, port=4420 00:22:09.503 [2024-12-13 13:06:50.190695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.503 [2024-12-13 13:06:50.190709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.503 [2024-12-13 13:06:50.190722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.503 [2024-12-13 13:06:50.190730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.503 [2024-12-13 13:06:50.190737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.503 [2024-12-13 13:06:50.190802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:09.503 [2024-12-13 13:06:50.200604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.503 [2024-12-13 13:06:50.200729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.200783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.200799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905cf0 with addr=10.0.0.2, port=4420 00:22:09.503 [2024-12-13 13:06:50.200808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.503 [2024-12-13 13:06:50.200822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.503 [2024-12-13 13:06:50.200862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.503 [2024-12-13 13:06:50.200873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.503 [2024-12-13 13:06:50.200897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.503 [2024-12-13 13:06:50.200926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:09.503 [2024-12-13 13:06:50.210689] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.503 [2024-12-13 13:06:50.210808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.210849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.210863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905cf0 with addr=10.0.0.2, port=4420 00:22:09.503 [2024-12-13 13:06:50.210872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.503 [2024-12-13 13:06:50.210886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.503 [2024-12-13 13:06:50.210907] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.503 [2024-12-13 13:06:50.210916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.503 [2024-12-13 13:06:50.210924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.503 [2024-12-13 13:06:50.210936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:09.503 [2024-12-13 13:06:50.220756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.503 [2024-12-13 13:06:50.220856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.220896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.503 [2024-12-13 13:06:50.220910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905cf0 with addr=10.0.0.2, port=4420 00:22:09.504 [2024-12-13 13:06:50.220919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.504 [2024-12-13 13:06:50.220933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.504 [2024-12-13 13:06:50.220954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.504 [2024-12-13 13:06:50.220963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.504 [2024-12-13 13:06:50.220971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.504 [2024-12-13 13:06:50.220983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:09.504 [2024-12-13 13:06:50.230813] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.504 [2024-12-13 13:06:50.230909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.504 [2024-12-13 13:06:50.230966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.504 [2024-12-13 13:06:50.230980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905cf0 with addr=10.0.0.2, port=4420 00:22:09.504 [2024-12-13 13:06:50.230989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.504 [2024-12-13 13:06:50.231003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.504 [2024-12-13 13:06:50.231023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.504 [2024-12-13 13:06:50.231032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.504 [2024-12-13 13:06:50.231040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.504 [2024-12-13 13:06:50.231052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:09.504 [2024-12-13 13:06:50.240891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.504 [2024-12-13 13:06:50.241000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.504 [2024-12-13 13:06:50.241051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.504 [2024-12-13 13:06:50.241065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905cf0 with addr=10.0.0.2, port=4420 00:22:09.504 [2024-12-13 13:06:50.241073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cf0 is same with the state(5) to be set 00:22:09.504 [2024-12-13 13:06:50.241087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905cf0 (9): Bad file descriptor 00:22:09.504 [2024-12-13 13:06:50.241125] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:09.504 [2024-12-13 13:06:50.241143] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:09.504 [2024-12-13 13:06:50.241205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.504 [2024-12-13 13:06:50.241217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.504 [2024-12-13 13:06:50.241225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.504 [2024-12-13 13:06:50.241240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
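The trace above removes the 10.0.0.2:4420 listener from nqn.2016-06.io.spdk:cnode0 and then watches the host side: every reconnect to port 4420 fails with errno 111 until the discovery poller drops that path and only 4421 remains. The host/discovery.sh@63 and @74 lines are small helpers built on rpc_cmd; a minimal sketch of them, assuming rpc_cmd simply forwards to scripts/rpc.py from the SPDK repo root (the real framework helper is more elaborate):

    # Simplified stand-in for the framework helper.
    rpc_cmd() { scripts/rpc.py "$@"; }

    get_subsystem_paths() {
        # Print the trsvcid of every path of the named controller, e.g.
        # "4420 4421" before the listener is removed and "4421" afterwards.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_notification_count() {
        # Count notifications newer than $notify_id and advance the running id;
        # the increment rule is inferred from the notify_id=2/notify_id=4 values above.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" |
            jq '. | length')
        notify_id=$((notify_id + notification_count))
    }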
00:22:10.439 13:06:51 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:10.439 13:06:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:10.439 13:06:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.439 13:06:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:10.439 13:06:51 -- common/autotest_common.sh@10 -- # set +x 00:22:10.439 13:06:51 -- host/discovery.sh@59 -- # sort 00:22:10.439 13:06:51 -- host/discovery.sh@59 -- # xargs 00:22:10.439 13:06:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@129 -- # get_bdev_list 00:22:10.698 13:06:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:10.698 13:06:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.698 13:06:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.698 13:06:51 -- common/autotest_common.sh@10 -- # set +x 00:22:10.698 13:06:51 -- host/discovery.sh@55 -- # sort 00:22:10.698 13:06:51 -- host/discovery.sh@55 -- # xargs 00:22:10.698 13:06:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:10.698 13:06:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:10.698 13:06:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.698 13:06:51 -- common/autotest_common.sh@10 -- # set +x 00:22:10.698 13:06:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:10.698 13:06:51 -- host/discovery.sh@63 -- # sort -n 00:22:10.698 13:06:51 -- host/discovery.sh@63 -- # xargs 00:22:10.698 13:06:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@131 -- # get_notification_count 00:22:10.698 13:06:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:10.698 13:06:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:10.698 13:06:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.698 13:06:51 -- common/autotest_common.sh@10 -- # set +x 00:22:10.698 13:06:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@74 -- # notification_count=0 00:22:10.698 13:06:51 -- host/discovery.sh@75 -- # notify_id=2 00:22:10.698 13:06:51 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:10.698 13:06:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.698 13:06:51 -- common/autotest_common.sh@10 -- # set +x 00:22:10.698 13:06:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.698 13:06:51 -- host/discovery.sh@135 -- # sleep 1 00:22:11.635 13:06:52 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:11.635 13:06:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.635 13:06:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.635 13:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.635 13:06:52 -- common/autotest_common.sh@10 -- # set +x 00:22:11.635 13:06:52 -- host/discovery.sh@59 -- # sort 00:22:11.635 13:06:52 -- host/discovery.sh@59 -- # xargs 00:22:11.893 13:06:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.893 13:06:52 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:11.893 13:06:52 -- host/discovery.sh@137 -- # get_bdev_list 00:22:11.893 13:06:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.893 13:06:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.893 13:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.893 13:06:52 -- common/autotest_common.sh@10 -- # set +x 00:22:11.893 13:06:52 -- host/discovery.sh@55 -- # sort 00:22:11.893 13:06:52 -- host/discovery.sh@55 -- # xargs 00:22:11.893 13:06:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.893 13:06:52 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:11.894 13:06:52 -- host/discovery.sh@138 -- # get_notification_count 00:22:11.894 13:06:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:11.894 13:06:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:11.894 13:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.894 13:06:52 -- common/autotest_common.sh@10 -- # set +x 00:22:11.894 13:06:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.894 13:06:52 -- host/discovery.sh@74 -- # notification_count=2 00:22:11.894 13:06:52 -- host/discovery.sh@75 -- # notify_id=4 00:22:11.894 13:06:52 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:11.894 13:06:52 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:11.894 13:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.894 13:06:52 -- common/autotest_common.sh@10 -- # set +x 00:22:12.830 [2024-12-13 13:06:53.581943] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.830 [2024-12-13 13:06:53.581968] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.830 [2024-12-13 13:06:53.581984] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:13.089 [2024-12-13 13:06:53.668068] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:13.089 [2024-12-13 13:06:53.727096] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:13.089 [2024-12-13 13:06:53.727171] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:13.089 13:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.089 13:06:53 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:13.089 13:06:53 -- common/autotest_common.sh@650 -- # local es=0 00:22:13.089 13:06:53 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:13.089 13:06:53 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:13.089 13:06:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.089 13:06:53 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:13.089 13:06:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.089 13:06:53 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:13.089 13:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.089 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.089 2024/12/13 13:06:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:13.089 request: 00:22:13.089 { 00:22:13.089 "method": "bdev_nvme_start_discovery", 00:22:13.089 "params": { 00:22:13.089 "name": "nvme", 00:22:13.089 "trtype": "tcp", 00:22:13.089 "traddr": "10.0.0.2", 00:22:13.089 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:13.089 
"adrfam": "ipv4", 00:22:13.089 "trsvcid": "8009", 00:22:13.089 "wait_for_attach": true 00:22:13.089 } 00:22:13.089 } 00:22:13.089 Got JSON-RPC error response 00:22:13.089 GoRPCClient: error on JSON-RPC call 00:22:13.089 13:06:53 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:13.089 13:06:53 -- common/autotest_common.sh@653 -- # es=1 00:22:13.089 13:06:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.089 13:06:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.089 13:06:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.089 13:06:53 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:13.089 13:06:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:13.089 13:06:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:13.089 13:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.089 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.089 13:06:53 -- host/discovery.sh@67 -- # xargs 00:22:13.089 13:06:53 -- host/discovery.sh@67 -- # sort 00:22:13.089 13:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.089 13:06:53 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:13.089 13:06:53 -- host/discovery.sh@147 -- # get_bdev_list 00:22:13.089 13:06:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.089 13:06:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:13.089 13:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.089 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.089 13:06:53 -- host/discovery.sh@55 -- # sort 00:22:13.089 13:06:53 -- host/discovery.sh@55 -- # xargs 00:22:13.089 13:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.089 13:06:53 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:13.089 13:06:53 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:13.089 13:06:53 -- common/autotest_common.sh@650 -- # local es=0 00:22:13.089 13:06:53 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:13.089 13:06:53 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:13.089 13:06:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.089 13:06:53 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:13.348 13:06:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.348 13:06:53 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:13.348 13:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.348 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.348 2024/12/13 13:06:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:13.348 request: 00:22:13.348 { 00:22:13.348 "method": "bdev_nvme_start_discovery", 00:22:13.348 "params": { 00:22:13.348 "name": "nvme_second", 00:22:13.348 "trtype": "tcp", 00:22:13.348 "traddr": "10.0.0.2", 
00:22:13.348 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:13.348 "adrfam": "ipv4", 00:22:13.348 "trsvcid": "8009", 00:22:13.348 "wait_for_attach": true 00:22:13.348 } 00:22:13.348 } 00:22:13.348 Got JSON-RPC error response 00:22:13.348 GoRPCClient: error on JSON-RPC call 00:22:13.348 13:06:53 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:13.348 13:06:53 -- common/autotest_common.sh@653 -- # es=1 00:22:13.348 13:06:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.348 13:06:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.348 13:06:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.348 13:06:53 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:13.348 13:06:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:13.348 13:06:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:13.348 13:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.348 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.348 13:06:53 -- host/discovery.sh@67 -- # xargs 00:22:13.348 13:06:53 -- host/discovery.sh@67 -- # sort 00:22:13.348 13:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.348 13:06:53 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:13.348 13:06:53 -- host/discovery.sh@153 -- # get_bdev_list 00:22:13.348 13:06:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.348 13:06:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:13.348 13:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.348 13:06:53 -- host/discovery.sh@55 -- # xargs 00:22:13.348 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.348 13:06:53 -- host/discovery.sh@55 -- # sort 00:22:13.348 13:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.348 13:06:53 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:13.348 13:06:53 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:13.348 13:06:53 -- common/autotest_common.sh@650 -- # local es=0 00:22:13.348 13:06:53 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:13.348 13:06:53 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:13.348 13:06:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.348 13:06:53 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:13.348 13:06:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.348 13:06:53 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:13.348 13:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.348 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:22:14.283 [2024-12-13 13:06:54.985248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.283 [2024-12-13 13:06:54.985331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.283 [2024-12-13 13:06:54.985348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905300 with addr=10.0.0.2, port=8010 00:22:14.283 [2024-12-13 13:06:54.985363] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:14.283 [2024-12-13 13:06:54.985371] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:14.283 [2024-12-13 13:06:54.985379] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:15.218 [2024-12-13 13:06:55.985221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.218 [2024-12-13 13:06:55.985315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.218 [2024-12-13 13:06:55.985332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905300 with addr=10.0.0.2, port=8010 00:22:15.218 [2024-12-13 13:06:55.985346] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:15.218 [2024-12-13 13:06:55.985354] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:15.218 [2024-12-13 13:06:55.985362] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:16.594 [2024-12-13 13:06:56.985151] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:16.594 2024/12/13 13:06:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:16.594 request: 00:22:16.594 { 00:22:16.594 "method": "bdev_nvme_start_discovery", 00:22:16.594 "params": { 00:22:16.594 "name": "nvme_second", 00:22:16.594 "trtype": "tcp", 00:22:16.594 "traddr": "10.0.0.2", 00:22:16.594 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:16.594 "adrfam": "ipv4", 00:22:16.594 "trsvcid": "8010", 00:22:16.594 "attach_timeout_ms": 3000 00:22:16.594 } 00:22:16.594 } 00:22:16.594 Got JSON-RPC error response 00:22:16.594 GoRPCClient: error on JSON-RPC call 00:22:16.594 13:06:56 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:16.594 13:06:56 -- common/autotest_common.sh@653 -- # es=1 00:22:16.594 13:06:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.594 13:06:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.594 13:06:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.594 13:06:56 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:16.594 13:06:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:16.594 13:06:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:16.594 13:06:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.594 13:06:56 -- common/autotest_common.sh@10 -- # set +x 00:22:16.594 13:06:56 -- host/discovery.sh@67 -- # sort 00:22:16.594 13:06:56 -- host/discovery.sh@67 -- # xargs 00:22:16.594 13:06:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.594 13:06:57 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:16.594 13:06:57 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:16.594 13:06:57 -- host/discovery.sh@162 -- # kill 95973 00:22:16.594 13:06:57 -- host/discovery.sh@163 -- # nvmftestfini 00:22:16.594 13:06:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:16.594 13:06:57 -- nvmf/common.sh@116 -- # sync 00:22:16.594 13:06:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:16.594 13:06:57 -- nvmf/common.sh@119 -- # set +e 00:22:16.594 13:06:57 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:22:16.594 13:06:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:16.594 rmmod nvme_tcp 00:22:16.594 rmmod nvme_fabrics 00:22:16.594 rmmod nvme_keyring 00:22:16.594 13:06:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:16.594 13:06:57 -- nvmf/common.sh@123 -- # set -e 00:22:16.594 13:06:57 -- nvmf/common.sh@124 -- # return 0 00:22:16.594 13:06:57 -- nvmf/common.sh@477 -- # '[' -n 95917 ']' 00:22:16.594 13:06:57 -- nvmf/common.sh@478 -- # killprocess 95917 00:22:16.594 13:06:57 -- common/autotest_common.sh@936 -- # '[' -z 95917 ']' 00:22:16.594 13:06:57 -- common/autotest_common.sh@940 -- # kill -0 95917 00:22:16.594 13:06:57 -- common/autotest_common.sh@941 -- # uname 00:22:16.594 13:06:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:16.594 13:06:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95917 00:22:16.594 13:06:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:16.594 13:06:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:16.594 killing process with pid 95917 00:22:16.595 13:06:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95917' 00:22:16.595 13:06:57 -- common/autotest_common.sh@955 -- # kill 95917 00:22:16.595 13:06:57 -- common/autotest_common.sh@960 -- # wait 95917 00:22:16.869 13:06:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:16.869 13:06:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:16.869 13:06:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:16.869 13:06:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.869 13:06:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:16.869 13:06:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.869 13:06:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.869 13:06:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.869 13:06:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:16.869 00:22:16.869 real 0m14.384s 00:22:16.869 user 0m27.831s 00:22:16.869 sys 0m1.764s 00:22:16.869 13:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:16.869 13:06:57 -- common/autotest_common.sh@10 -- # set +x 00:22:16.869 ************************************ 00:22:16.869 END TEST nvmf_discovery 00:22:16.869 ************************************ 00:22:16.869 13:06:57 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:16.869 13:06:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:16.869 13:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.869 13:06:57 -- common/autotest_common.sh@10 -- # set +x 00:22:16.869 ************************************ 00:22:16.869 START TEST nvmf_discovery_remove_ifc 00:22:16.869 ************************************ 00:22:16.869 13:06:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:17.141 * Looking for test storage... 
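nvmf.sh@102 hands the next script to run_test, which printed the START TEST banner above and will print a matching END TEST banner when the script returns, as it just did for nvmf_discovery (the real/user/sys figures come from the same wrapper). A rough sketch of that wrapper, covering only the behaviour visible in this log; the timing and xtrace handling of the real autotest_common.sh helper are omitted:

    run_test() {
        # run_test <name> <command> [args...]
        if [ "$#" -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # As invoked above:
    # run_test nvmf_discovery_remove_ifc \
    #     /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp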
00:22:17.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:17.141 13:06:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:17.141 13:06:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:17.141 13:06:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:17.141 13:06:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:17.141 13:06:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:17.141 13:06:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:17.141 13:06:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:17.141 13:06:57 -- scripts/common.sh@335 -- # IFS=.-: 00:22:17.141 13:06:57 -- scripts/common.sh@335 -- # read -ra ver1 00:22:17.141 13:06:57 -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.141 13:06:57 -- scripts/common.sh@336 -- # read -ra ver2 00:22:17.141 13:06:57 -- scripts/common.sh@337 -- # local 'op=<' 00:22:17.141 13:06:57 -- scripts/common.sh@339 -- # ver1_l=2 00:22:17.141 13:06:57 -- scripts/common.sh@340 -- # ver2_l=1 00:22:17.141 13:06:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:17.141 13:06:57 -- scripts/common.sh@343 -- # case "$op" in 00:22:17.141 13:06:57 -- scripts/common.sh@344 -- # : 1 00:22:17.141 13:06:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:17.141 13:06:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:17.141 13:06:57 -- scripts/common.sh@364 -- # decimal 1 00:22:17.141 13:06:57 -- scripts/common.sh@352 -- # local d=1 00:22:17.141 13:06:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.141 13:06:57 -- scripts/common.sh@354 -- # echo 1 00:22:17.141 13:06:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:17.141 13:06:57 -- scripts/common.sh@365 -- # decimal 2 00:22:17.141 13:06:57 -- scripts/common.sh@352 -- # local d=2 00:22:17.141 13:06:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.141 13:06:57 -- scripts/common.sh@354 -- # echo 2 00:22:17.141 13:06:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:17.141 13:06:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:17.141 13:06:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:17.141 13:06:57 -- scripts/common.sh@367 -- # return 0 00:22:17.141 13:06:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.141 13:06:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:17.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.141 --rc genhtml_branch_coverage=1 00:22:17.141 --rc genhtml_function_coverage=1 00:22:17.141 --rc genhtml_legend=1 00:22:17.141 --rc geninfo_all_blocks=1 00:22:17.141 --rc geninfo_unexecuted_blocks=1 00:22:17.141 00:22:17.141 ' 00:22:17.141 13:06:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:17.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.141 --rc genhtml_branch_coverage=1 00:22:17.141 --rc genhtml_function_coverage=1 00:22:17.141 --rc genhtml_legend=1 00:22:17.141 --rc geninfo_all_blocks=1 00:22:17.141 --rc geninfo_unexecuted_blocks=1 00:22:17.141 00:22:17.141 ' 00:22:17.141 13:06:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:17.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.141 --rc genhtml_branch_coverage=1 00:22:17.141 --rc genhtml_function_coverage=1 00:22:17.141 --rc genhtml_legend=1 00:22:17.141 --rc geninfo_all_blocks=1 00:22:17.141 --rc geninfo_unexecuted_blocks=1 00:22:17.141 00:22:17.141 ' 00:22:17.141 
13:06:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:17.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.141 --rc genhtml_branch_coverage=1 00:22:17.141 --rc genhtml_function_coverage=1 00:22:17.141 --rc genhtml_legend=1 00:22:17.141 --rc geninfo_all_blocks=1 00:22:17.141 --rc geninfo_unexecuted_blocks=1 00:22:17.141 00:22:17.141 ' 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:17.141 13:06:57 -- nvmf/common.sh@7 -- # uname -s 00:22:17.141 13:06:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.141 13:06:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.141 13:06:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.141 13:06:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.141 13:06:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.141 13:06:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.141 13:06:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.141 13:06:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.141 13:06:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.141 13:06:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.141 13:06:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:22:17.141 13:06:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:22:17.141 13:06:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.141 13:06:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.141 13:06:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:17.141 13:06:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:17.141 13:06:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.141 13:06:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.141 13:06:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.141 13:06:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.141 13:06:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.141 13:06:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.141 13:06:57 -- paths/export.sh@5 -- # export PATH 00:22:17.141 13:06:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.141 13:06:57 -- nvmf/common.sh@46 -- # : 0 00:22:17.141 13:06:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:17.141 13:06:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:17.141 13:06:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:17.141 13:06:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.141 13:06:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.141 13:06:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:17.141 13:06:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:17.141 13:06:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:17.141 13:06:57 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:17.141 13:06:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:17.141 13:06:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.141 13:06:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:17.141 13:06:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:17.141 13:06:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:17.141 13:06:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.141 13:06:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.141 13:06:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.141 13:06:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:17.141 13:06:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:17.141 13:06:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:17.141 13:06:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:17.141 13:06:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:17.141 13:06:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:17.141 13:06:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.141 13:06:57 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.142 13:06:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:17.142 13:06:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:17.142 13:06:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:17.142 13:06:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:17.142 13:06:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:17.142 13:06:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.142 13:06:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:17.142 13:06:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:17.142 13:06:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:17.142 13:06:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:17.142 13:06:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:17.142 13:06:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:17.142 Cannot find device "nvmf_tgt_br" 00:22:17.142 13:06:57 -- nvmf/common.sh@154 -- # true 00:22:17.142 13:06:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.142 Cannot find device "nvmf_tgt_br2" 00:22:17.142 13:06:57 -- nvmf/common.sh@155 -- # true 00:22:17.142 13:06:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:17.142 13:06:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:17.142 Cannot find device "nvmf_tgt_br" 00:22:17.142 13:06:57 -- nvmf/common.sh@157 -- # true 00:22:17.142 13:06:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:17.142 Cannot find device "nvmf_tgt_br2" 00:22:17.142 13:06:57 -- nvmf/common.sh@158 -- # true 00:22:17.142 13:06:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:17.401 13:06:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:17.401 13:06:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:17.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:17.401 13:06:57 -- nvmf/common.sh@161 -- # true 00:22:17.401 13:06:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:17.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:17.401 13:06:57 -- nvmf/common.sh@162 -- # true 00:22:17.401 13:06:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:17.401 13:06:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:17.401 13:06:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:17.401 13:06:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:17.401 13:06:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:17.401 13:06:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:17.401 13:06:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:17.401 13:06:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:17.401 13:06:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:17.401 13:06:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:17.401 13:06:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:17.401 13:06:58 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:17.401 13:06:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:17.401 13:06:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:17.401 13:06:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:17.401 13:06:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:17.401 13:06:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:17.401 13:06:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:17.401 13:06:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:17.401 13:06:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:17.401 13:06:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:17.401 13:06:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:17.401 13:06:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:17.401 13:06:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:17.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:22:17.401 00:22:17.401 --- 10.0.0.2 ping statistics --- 00:22:17.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.401 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:22:17.401 13:06:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:17.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:17.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:22:17.401 00:22:17.401 --- 10.0.0.3 ping statistics --- 00:22:17.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.401 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:17.401 13:06:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:17.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:17.401 00:22:17.401 --- 10.0.0.1 ping statistics --- 00:22:17.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.401 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:17.401 13:06:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.401 13:06:58 -- nvmf/common.sh@421 -- # return 0 00:22:17.401 13:06:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:17.401 13:06:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.401 13:06:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:17.401 13:06:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:17.401 13:06:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.401 13:06:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:17.401 13:06:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:17.401 13:06:58 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:17.401 13:06:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:17.401 13:06:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.401 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:22:17.401 13:06:58 -- nvmf/common.sh@469 -- # nvmfpid=96486 00:22:17.401 13:06:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.401 13:06:58 -- nvmf/common.sh@470 -- # waitforlisten 96486 00:22:17.401 13:06:58 -- common/autotest_common.sh@829 -- # '[' -z 96486 ']' 00:22:17.401 13:06:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.401 13:06:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.401 13:06:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.401 13:06:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.401 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:22:17.670 [2024-12-13 13:06:58.222926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:17.670 [2024-12-13 13:06:58.223522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.670 [2024-12-13 13:06:58.360905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.670 [2024-12-13 13:06:58.444058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:17.670 [2024-12-13 13:06:58.444297] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.670 [2024-12-13 13:06:58.444318] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.670 [2024-12-13 13:06:58.444328] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
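The ip/iptables commands traced above are the nvmf_veth_init topology: one veth pair for the initiator (nvmf_init_if/nvmf_init_br), another whose far end lives inside the nvmf_tgt_ns_spdk namespace for the target (nvmf_tgt_if/nvmf_tgt_br), both enslaved to the nvmf_br bridge, followed by ping checks in each direction. The same sequence condensed into one runnable block (root required; the second target interface with 10.0.0.3 is configured the same way and left out here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # join both halves
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator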
00:22:17.670 [2024-12-13 13:06:58.444406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.607 13:06:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.607 13:06:59 -- common/autotest_common.sh@862 -- # return 0 00:22:18.607 13:06:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:18.607 13:06:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.607 13:06:59 -- common/autotest_common.sh@10 -- # set +x 00:22:18.607 13:06:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.607 13:06:59 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:18.607 13:06:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.607 13:06:59 -- common/autotest_common.sh@10 -- # set +x 00:22:18.607 [2024-12-13 13:06:59.278675] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.607 [2024-12-13 13:06:59.286888] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:18.608 null0 00:22:18.608 [2024-12-13 13:06:59.318802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.608 13:06:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.608 13:06:59 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96541 00:22:18.608 13:06:59 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:18.608 13:06:59 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96541 /tmp/host.sock 00:22:18.608 13:06:59 -- common/autotest_common.sh@829 -- # '[' -z 96541 ']' 00:22:18.608 13:06:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:18.608 13:06:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.608 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:18.608 13:06:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:18.608 13:06:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.608 13:06:59 -- common/autotest_common.sh@10 -- # set +x 00:22:18.866 [2024-12-13 13:06:59.393490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
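At this point two SPDK apps are running: the target (pid 96486, listening on 10.0.0.2 ports 8009 and 4420 inside the namespace) and a second nvmf_tgt started with -r /tmp/host.sock --wait-for-rpc -L bdev_nvme, which the test only uses as the NVMe host. waitforlisten blocks until that second app answers on its RPC socket before any rpc_cmd is issued. A minimal sketch of such a wait loop; the 100-retry cap matches max_retries in the trace, while the rpc_get_methods probe and the 0.1 s interval are assumptions:

    waitforlisten() {
        # waitforlisten <pid> [rpc_socket]
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                        # app exited early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                                          # never came up
    }

    # As used above: waitforlisten 96541 /tmp/host.sock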
00:22:18.866 [2024-12-13 13:06:59.393584] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96541 ] 00:22:18.866 [2024-12-13 13:06:59.534469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.866 [2024-12-13 13:06:59.613832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:18.866 [2024-12-13 13:06:59.614032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.803 13:07:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.803 13:07:00 -- common/autotest_common.sh@862 -- # return 0 00:22:19.803 13:07:00 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.803 13:07:00 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:19.803 13:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.803 13:07:00 -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 13:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.803 13:07:00 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:19.803 13:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.803 13:07:00 -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 13:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.803 13:07:00 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:19.803 13:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.803 13:07:00 -- common/autotest_common.sh@10 -- # set +x 00:22:20.737 [2024-12-13 13:07:01.492588] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:20.738 [2024-12-13 13:07:01.492617] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:20.738 [2024-12-13 13:07:01.492632] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:20.996 [2024-12-13 13:07:01.578697] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:20.996 [2024-12-13 13:07:01.634188] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:20.996 [2024-12-13 13:07:01.634250] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:20.996 [2024-12-13 13:07:01.634275] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:20.996 [2024-12-13 13:07:01.634289] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:20.996 [2024-12-13 13:07:01.634310] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:20.996 13:07:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.996 13:07:01 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.996 [2024-12-13 13:07:01.640703] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18be6c0 was disconnected and freed. delete nvme_qpair. 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.996 13:07:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.996 13:07:01 -- common/autotest_common.sh@10 -- # set +x 00:22:20.996 13:07:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.996 13:07:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.996 13:07:01 -- common/autotest_common.sh@10 -- # set +x 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.996 13:07:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:20.996 13:07:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:22.372 13:07:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.372 13:07:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.372 13:07:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.372 13:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.372 13:07:02 -- common/autotest_common.sh@10 -- # set +x 00:22:22.372 13:07:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.372 13:07:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.372 13:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.372 13:07:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:22.372 13:07:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:23.307 13:07:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:23.307 13:07:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.307 13:07:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:23.307 13:07:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.307 13:07:03 -- common/autotest_common.sh@10 -- # set +x 00:22:23.307 13:07:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:23.307 13:07:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:23.307 13:07:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.307 13:07:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:23.307 13:07:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:24.245 13:07:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.245 13:07:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
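After bdev nvme0n1 shows up, the test deletes 10.0.0.2/24 from nvmf_tgt_if and takes the interface down, then polls the host's bdev list once per second until the namespace disappears. The discovery_remove_ifc.sh@29-@34 lines traced above and below are that polling pair; a reconstruction of them (names follow the trace, bodies are inferred, and the real wait_for_bdev presumably also enforces a timeout):

    # rpc_cmd: wrapper around scripts/rpc.py, as sketched earlier.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Block until the bdev list equals the expected string: "nvme0n1"
        # right after attach, "" once the target interface has been removed.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }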
00:22:24.245 13:07:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.245 13:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.245 13:07:04 -- common/autotest_common.sh@10 -- # set +x 00:22:24.245 13:07:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.245 13:07:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.245 13:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.245 13:07:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:24.245 13:07:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.190 13:07:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.190 13:07:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.190 13:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.190 13:07:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.190 13:07:05 -- common/autotest_common.sh@10 -- # set +x 00:22:25.190 13:07:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.190 13:07:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.448 13:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.448 13:07:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:25.448 13:07:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.385 13:07:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:26.385 13:07:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.385 13:07:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:26.385 13:07:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.385 13:07:07 -- common/autotest_common.sh@10 -- # set +x 00:22:26.385 13:07:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:26.385 13:07:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:26.385 13:07:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.385 [2024-12-13 13:07:07.062464] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:26.385 [2024-12-13 13:07:07.062529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.385 [2024-12-13 13:07:07.062542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.385 [2024-12-13 13:07:07.062553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.385 [2024-12-13 13:07:07.062562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.385 [2024-12-13 13:07:07.062570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.385 [2024-12-13 13:07:07.062578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.385 [2024-12-13 13:07:07.062587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.385 [2024-12-13 13:07:07.062595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.385 [2024-12-13 
13:07:07.062604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.385 [2024-12-13 13:07:07.062612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.385 [2024-12-13 13:07:07.062620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189a4b0 is same with the state(5) to be set 00:22:26.385 [2024-12-13 13:07:07.072461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189a4b0 (9): Bad file descriptor 00:22:26.385 13:07:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:26.385 13:07:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.385 [2024-12-13 13:07:07.082480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:27.320 13:07:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:27.320 13:07:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.320 13:07:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:27.320 13:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.320 13:07:08 -- common/autotest_common.sh@10 -- # set +x 00:22:27.320 13:07:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:27.320 13:07:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:27.579 [2024-12-13 13:07:08.107867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:28.514 [2024-12-13 13:07:09.131867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:28.514 [2024-12-13 13:07:09.131969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189a4b0 with addr=10.0.0.2, port=4420 00:22:28.514 [2024-12-13 13:07:09.132002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189a4b0 is same with the state(5) to be set 00:22:28.514 [2024-12-13 13:07:09.132050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.514 [2024-12-13 13:07:09.132070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.514 [2024-12-13 13:07:09.132095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:28.515 [2024-12-13 13:07:09.132114] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:28.515 [2024-12-13 13:07:09.132919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189a4b0 (9): Bad file descriptor 00:22:28.515 [2024-12-13 13:07:09.133016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:28.515 [2024-12-13 13:07:09.133063] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:28.515 [2024-12-13 13:07:09.133125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.515 [2024-12-13 13:07:09.133168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.515 [2024-12-13 13:07:09.133195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.515 [2024-12-13 13:07:09.133215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.515 [2024-12-13 13:07:09.133235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.515 [2024-12-13 13:07:09.133254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.515 [2024-12-13 13:07:09.133277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.515 [2024-12-13 13:07:09.133296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.515 [2024-12-13 13:07:09.133316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.515 [2024-12-13 13:07:09.133335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.515 [2024-12-13 13:07:09.133367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
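The Discovery[10.0.0.2:8009] messages above come from a discovery service that an earlier part of the test registered on the host-side app; once the admin queue dies, remove_discovery_entry drops the NVM subsystem it had attached. A hedged sketch of how such a service is typically registered (the flags mirror bdev_nvme_attach_controller, but this exact invocation is an assumption, not copied from this log; check rpc.py bdev_nvme_start_discovery -h for the authoritative set):

# Assumed registration step, not shown in this excerpt: follow the target's
# discovery service on 10.0.0.2:8009 and auto-attach advertised subsystems.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4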
00:22:28.515 [2024-12-13 13:07:09.133428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18858f0 (9): Bad file descriptor 00:22:28.515 [2024-12-13 13:07:09.134433] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:28.515 [2024-12-13 13:07:09.134485] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:28.515 13:07:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.515 13:07:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:28.515 13:07:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:29.450 13:07:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.450 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.450 13:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.450 13:07:10 -- common/autotest_common.sh@10 -- # set +x 00:22:29.450 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.450 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.450 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.450 13:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.450 13:07:10 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:29.451 13:07:10 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.709 13:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.709 13:07:10 -- common/autotest_common.sh@10 -- # set +x 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.709 13:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:29.709 13:07:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.646 [2024-12-13 13:07:11.143491] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:30.646 [2024-12-13 13:07:11.143686] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:30.646 [2024-12-13 13:07:11.143744] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:30.646 [2024-12-13 13:07:11.229613] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:30.646 [2024-12-13 13:07:11.284522] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:30.646 [2024-12-13 13:07:11.284563] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:30.646 [2024-12-13 13:07:11.284585] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:30.646 [2024-12-13 13:07:11.284599] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:30.646 [2024-12-13 13:07:11.284606] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:30.646 [2024-12-13 13:07:11.292146] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18c9330 was disconnected and freed. delete nvme_qpair. 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.646 13:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.646 13:07:11 -- common/autotest_common.sh@10 -- # set +x 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.646 13:07:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:30.646 13:07:11 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96541 00:22:30.646 13:07:11 -- common/autotest_common.sh@936 -- # '[' -z 96541 ']' 00:22:30.646 13:07:11 -- common/autotest_common.sh@940 -- # kill -0 96541 00:22:30.646 13:07:11 -- common/autotest_common.sh@941 -- # uname 00:22:30.646 13:07:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.646 13:07:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96541 00:22:30.646 killing process with pid 96541 00:22:30.646 13:07:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:30.646 13:07:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:30.646 13:07:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96541' 00:22:30.646 13:07:11 -- common/autotest_common.sh@955 -- # kill 96541 00:22:30.646 13:07:11 -- common/autotest_common.sh@960 -- # wait 96541 00:22:30.905 13:07:11 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:30.905 13:07:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:30.905 13:07:11 -- nvmf/common.sh@116 -- # sync 00:22:30.905 13:07:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:30.905 13:07:11 -- nvmf/common.sh@119 -- # set +e 00:22:30.905 13:07:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:30.905 13:07:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:30.905 rmmod nvme_tcp 00:22:30.905 rmmod nvme_fabrics 00:22:30.905 rmmod nvme_keyring 00:22:31.164 13:07:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:31.164 13:07:11 -- nvmf/common.sh@123 -- # set -e 00:22:31.164 13:07:11 -- nvmf/common.sh@124 -- # return 0 00:22:31.164 13:07:11 -- nvmf/common.sh@477 -- # '[' -n 96486 ']' 00:22:31.164 13:07:11 -- nvmf/common.sh@478 -- # killprocess 96486 00:22:31.164 13:07:11 -- common/autotest_common.sh@936 -- # '[' -z 96486 ']' 00:22:31.164 13:07:11 -- common/autotest_common.sh@940 -- # kill -0 96486 00:22:31.164 13:07:11 -- common/autotest_common.sh@941 -- # uname 00:22:31.164 13:07:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.164 13:07:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96486 00:22:31.164 killing process with pid 96486 00:22:31.164 13:07:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:31.164 13:07:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
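Lines @82/@83 above undo the fault injection: the address goes back on nvmf_tgt_if, the link comes up, the discovery poller reconnects, attaches a fresh controller as nvme1, and wait_for_bdev blocks until its namespace appears. Condensed from the trace:

# Recovery path exactly as traced above (@82, @83, @86).
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1   # same polling helper sketched earlier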
00:22:31.164 13:07:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96486' 00:22:31.164 13:07:11 -- common/autotest_common.sh@955 -- # kill 96486 00:22:31.164 13:07:11 -- common/autotest_common.sh@960 -- # wait 96486 00:22:31.164 13:07:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:31.164 13:07:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:31.164 13:07:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:31.164 13:07:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.164 13:07:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:31.164 13:07:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.164 13:07:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.164 13:07:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.423 13:07:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:31.423 ************************************ 00:22:31.423 END TEST nvmf_discovery_remove_ifc 00:22:31.423 ************************************ 00:22:31.423 00:22:31.423 real 0m14.349s 00:22:31.423 user 0m24.651s 00:22:31.423 sys 0m1.593s 00:22:31.423 13:07:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:31.423 13:07:11 -- common/autotest_common.sh@10 -- # set +x 00:22:31.423 13:07:11 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:31.423 13:07:11 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:31.423 13:07:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:31.423 13:07:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:31.423 13:07:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.423 ************************************ 00:22:31.423 START TEST nvmf_digest 00:22:31.423 ************************************ 00:22:31.423 13:07:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:31.423 * Looking for test storage... 00:22:31.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:31.423 13:07:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:31.423 13:07:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:31.423 13:07:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:31.423 13:07:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:31.423 13:07:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:31.423 13:07:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:31.423 13:07:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:31.423 13:07:12 -- scripts/common.sh@335 -- # IFS=.-: 00:22:31.423 13:07:12 -- scripts/common.sh@335 -- # read -ra ver1 00:22:31.423 13:07:12 -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.423 13:07:12 -- scripts/common.sh@336 -- # read -ra ver2 00:22:31.423 13:07:12 -- scripts/common.sh@337 -- # local 'op=<' 00:22:31.423 13:07:12 -- scripts/common.sh@339 -- # ver1_l=2 00:22:31.423 13:07:12 -- scripts/common.sh@340 -- # ver2_l=1 00:22:31.423 13:07:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:31.423 13:07:12 -- scripts/common.sh@343 -- # case "$op" in 00:22:31.423 13:07:12 -- scripts/common.sh@344 -- # : 1 00:22:31.423 13:07:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:31.423 13:07:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.423 13:07:12 -- scripts/common.sh@364 -- # decimal 1 00:22:31.423 13:07:12 -- scripts/common.sh@352 -- # local d=1 00:22:31.423 13:07:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.423 13:07:12 -- scripts/common.sh@354 -- # echo 1 00:22:31.423 13:07:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:31.423 13:07:12 -- scripts/common.sh@365 -- # decimal 2 00:22:31.423 13:07:12 -- scripts/common.sh@352 -- # local d=2 00:22:31.423 13:07:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.423 13:07:12 -- scripts/common.sh@354 -- # echo 2 00:22:31.423 13:07:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:31.423 13:07:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:31.423 13:07:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:31.423 13:07:12 -- scripts/common.sh@367 -- # return 0 00:22:31.423 13:07:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.423 13:07:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.423 --rc genhtml_branch_coverage=1 00:22:31.423 --rc genhtml_function_coverage=1 00:22:31.423 --rc genhtml_legend=1 00:22:31.423 --rc geninfo_all_blocks=1 00:22:31.423 --rc geninfo_unexecuted_blocks=1 00:22:31.423 00:22:31.423 ' 00:22:31.423 13:07:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.423 --rc genhtml_branch_coverage=1 00:22:31.423 --rc genhtml_function_coverage=1 00:22:31.423 --rc genhtml_legend=1 00:22:31.423 --rc geninfo_all_blocks=1 00:22:31.423 --rc geninfo_unexecuted_blocks=1 00:22:31.423 00:22:31.423 ' 00:22:31.423 13:07:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.423 --rc genhtml_branch_coverage=1 00:22:31.423 --rc genhtml_function_coverage=1 00:22:31.423 --rc genhtml_legend=1 00:22:31.423 --rc geninfo_all_blocks=1 00:22:31.423 --rc geninfo_unexecuted_blocks=1 00:22:31.423 00:22:31.423 ' 00:22:31.423 13:07:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.423 --rc genhtml_branch_coverage=1 00:22:31.423 --rc genhtml_function_coverage=1 00:22:31.423 --rc genhtml_legend=1 00:22:31.423 --rc geninfo_all_blocks=1 00:22:31.423 --rc geninfo_unexecuted_blocks=1 00:22:31.423 00:22:31.423 ' 00:22:31.423 13:07:12 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:31.423 13:07:12 -- nvmf/common.sh@7 -- # uname -s 00:22:31.682 13:07:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.682 13:07:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.682 13:07:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.682 13:07:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.682 13:07:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.682 13:07:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.682 13:07:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.682 13:07:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.682 13:07:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.682 13:07:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.682 13:07:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:22:31.682 
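The scripts/common.sh trace above is the stock dotted-version comparison (lt 1.15 2) that decides which lcov flags to use. A compact sketch of the same field-by-field idea (the real cmp_versions supports more operators and separators than this):

# Minimal field-by-field version compare in the spirit of scripts/common.sh;
# succeeds when $1 is strictly older than $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov older than 2: use the legacy branch/function coverage flags"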
13:07:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:22:31.682 13:07:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.682 13:07:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.682 13:07:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.682 13:07:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.682 13:07:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.682 13:07:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.682 13:07:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.682 13:07:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.682 13:07:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.682 13:07:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.682 13:07:12 -- paths/export.sh@5 -- # export PATH 00:22:31.682 13:07:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.682 13:07:12 -- nvmf/common.sh@46 -- # : 0 00:22:31.682 13:07:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:31.682 13:07:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:31.682 13:07:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:31.682 13:07:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.682 13:07:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.682 13:07:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
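nvmf/common.sh above builds the host identity from nvme gen-hostnqn and reuses the UUID portion as the host ID later passed to nvme connect. One way to get the same pair (the real script derives it inline; the parameter expansion below is an illustration, not a quote):

# Illustration: gen-hostnqn returns "nqn.2014-08.org.nvmexpress:uuid:<uuid>",
# and the host ID used alongside it is just the trailing <uuid>.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "host identity: $NVME_HOSTNQN / $NVME_HOSTID"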
00:22:31.682 13:07:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:31.682 13:07:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:31.682 13:07:12 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:31.682 13:07:12 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:31.682 13:07:12 -- host/digest.sh@16 -- # runtime=2 00:22:31.682 13:07:12 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:31.682 13:07:12 -- host/digest.sh@132 -- # nvmftestinit 00:22:31.682 13:07:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:31.682 13:07:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.682 13:07:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:31.682 13:07:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:31.682 13:07:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:31.682 13:07:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.682 13:07:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.682 13:07:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.682 13:07:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:31.682 13:07:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:31.682 13:07:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:31.682 13:07:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:31.682 13:07:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:31.682 13:07:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:31.682 13:07:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.682 13:07:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.682 13:07:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:31.682 13:07:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:31.682 13:07:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.682 13:07:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.682 13:07:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.682 13:07:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.682 13:07:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.682 13:07:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.682 13:07:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.682 13:07:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.682 13:07:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:31.682 13:07:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:31.682 Cannot find device "nvmf_tgt_br" 00:22:31.682 13:07:12 -- nvmf/common.sh@154 -- # true 00:22:31.682 13:07:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.682 Cannot find device "nvmf_tgt_br2" 00:22:31.682 13:07:12 -- nvmf/common.sh@155 -- # true 00:22:31.682 13:07:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:31.682 13:07:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:31.682 Cannot find device "nvmf_tgt_br" 00:22:31.682 13:07:12 -- nvmf/common.sh@157 -- # true 00:22:31.682 13:07:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:31.682 Cannot find device "nvmf_tgt_br2" 00:22:31.682 13:07:12 -- nvmf/common.sh@158 -- # true 00:22:31.682 13:07:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:31.682 13:07:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:31.682 
13:07:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.683 13:07:12 -- nvmf/common.sh@161 -- # true 00:22:31.683 13:07:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:31.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.683 13:07:12 -- nvmf/common.sh@162 -- # true 00:22:31.683 13:07:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:31.683 13:07:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:31.683 13:07:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:31.683 13:07:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:31.683 13:07:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:31.683 13:07:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:31.683 13:07:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:31.683 13:07:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:31.683 13:07:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:31.683 13:07:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:31.683 13:07:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:31.683 13:07:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:31.683 13:07:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:31.683 13:07:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:31.683 13:07:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:31.683 13:07:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:31.941 13:07:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:31.941 13:07:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:31.942 13:07:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:31.942 13:07:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:31.942 13:07:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:31.942 13:07:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:31.942 13:07:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:31.942 13:07:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:31.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:22:31.942 00:22:31.942 --- 10.0.0.2 ping statistics --- 00:22:31.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.942 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:31.942 13:07:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:31.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:31.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:22:31.942 00:22:31.942 --- 10.0.0.3 ping statistics --- 00:22:31.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.942 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:31.942 13:07:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:31.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:31.942 00:22:31.942 --- 10.0.0.1 ping statistics --- 00:22:31.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.942 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:31.942 13:07:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.942 13:07:12 -- nvmf/common.sh@421 -- # return 0 00:22:31.942 13:07:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:31.942 13:07:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.942 13:07:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:31.942 13:07:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:31.942 13:07:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.942 13:07:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:31.942 13:07:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:31.942 13:07:12 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:31.942 13:07:12 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:31.942 13:07:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:31.942 13:07:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:31.942 13:07:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.942 ************************************ 00:22:31.942 START TEST nvmf_digest_clean 00:22:31.942 ************************************ 00:22:31.942 13:07:12 -- common/autotest_common.sh@1114 -- # run_digest 00:22:31.942 13:07:12 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:31.942 13:07:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:31.942 13:07:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.942 13:07:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.942 13:07:12 -- nvmf/common.sh@469 -- # nvmfpid=96969 00:22:31.942 13:07:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:31.942 13:07:12 -- nvmf/common.sh@470 -- # waitforlisten 96969 00:22:31.942 13:07:12 -- common/autotest_common.sh@829 -- # '[' -z 96969 ']' 00:22:31.942 13:07:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.942 13:07:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.942 13:07:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.942 13:07:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.942 13:07:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.942 [2024-12-13 13:07:12.618189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
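The nvmf_veth_init trace above builds the virtual topology the digest tests run on: a dedicated network namespace for the target, veth pairs whose bridge-side peers are enslaved to nvmf_br, the 10.0.0.x addresses, an iptables opening for port 4420, and ping checks proving reachability before the target starts (the "Cannot find device" and "No such file or directory" lines are just the preceding cleanup finding nothing to delete). Condensed into one block with the same interface names as the trace:

# Condensed from the nvmf_veth_init xtrace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # from the host side
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # from the target namespace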
00:22:31.942 [2024-12-13 13:07:12.618509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.201 [2024-12-13 13:07:12.751530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.201 [2024-12-13 13:07:12.836472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:32.201 [2024-12-13 13:07:12.836634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.201 [2024-12-13 13:07:12.836648] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.201 [2024-12-13 13:07:12.836657] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.201 [2024-12-13 13:07:12.836686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.201 13:07:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.201 13:07:12 -- common/autotest_common.sh@862 -- # return 0 00:22:32.201 13:07:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:32.201 13:07:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:32.201 13:07:12 -- common/autotest_common.sh@10 -- # set +x 00:22:32.201 13:07:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.201 13:07:12 -- host/digest.sh@120 -- # common_target_config 00:22:32.201 13:07:12 -- host/digest.sh@43 -- # rpc_cmd 00:22:32.201 13:07:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.201 13:07:12 -- common/autotest_common.sh@10 -- # set +x 00:22:32.460 null0 00:22:32.460 [2024-12-13 13:07:13.022001] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.460 [2024-12-13 13:07:13.046142] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.460 13:07:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.460 13:07:13 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:32.460 13:07:13 -- host/digest.sh@77 -- # local rw bs qd 00:22:32.460 13:07:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:32.460 13:07:13 -- host/digest.sh@80 -- # rw=randread 00:22:32.460 13:07:13 -- host/digest.sh@80 -- # bs=4096 00:22:32.460 13:07:13 -- host/digest.sh@80 -- # qd=128 00:22:32.460 13:07:13 -- host/digest.sh@82 -- # bperfpid=97005 00:22:32.460 13:07:13 -- host/digest.sh@83 -- # waitforlisten 97005 /var/tmp/bperf.sock 00:22:32.460 13:07:13 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:32.460 13:07:13 -- common/autotest_common.sh@829 -- # '[' -z 97005 ']' 00:22:32.460 13:07:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:32.460 13:07:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.460 13:07:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:32.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:32.460 13:07:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.460 13:07:13 -- common/autotest_common.sh@10 -- # set +x 00:22:32.460 [2024-12-13 13:07:13.106317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:32.460 [2024-12-13 13:07:13.106646] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97005 ] 00:22:32.718 [2024-12-13 13:07:13.247204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.718 [2024-12-13 13:07:13.328585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.718 13:07:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.718 13:07:13 -- common/autotest_common.sh@862 -- # return 0 00:22:32.718 13:07:13 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:32.718 13:07:13 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:32.718 13:07:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:32.977 13:07:13 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.977 13:07:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.547 nvme0n1 00:22:33.547 13:07:14 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:33.547 13:07:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.547 Running I/O for 2 seconds... 
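The run_bperf sequence traced above is the core of the clean-digest check: start bdevperf against its own RPC socket with --wait-for-rpc, initialize the framework over RPC, attach the NVMe-oF controller with data digest enabled (--ddgst), drive I/O for two seconds via bdevperf.py, then read accel_get_stats and confirm that crc32c work was actually executed, and by the expected module (software here, since no accel hardware is configured). A sketch stitched from the trace for the first combination (randread, 4096-byte I/O, queue depth 128); the bperf_rpc/bperf_py wrappers and $rootdir are assumptions standing in for the full paths shown above:

bperf_rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
bperf_py()  { "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"; }

# Start bdevperf idle (-z --wait-for-rpc) so the digest-enabled controller can
# be attached before any I/O is issued. The real run_bperf also waits for the
# socket to come up (waitforlisten) before issuing RPCs.
"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!

bperf_rpc framework_start_init
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
bperf_py perform_tests

# Data digests are crc32c operations in the accel layer: require at least one
# executed op and check which module ran it (software when no HW accel is set up).
read -r acc_module acc_executed < <(bperf_rpc accel_get_stats | jq -rc '.operations[]
    | select(.opcode=="crc32c")
    | "\(.module_name) \(.executed)"')
(( acc_executed > 0 )) && [[ $acc_module == software ]]

killprocess "$bperfpid"   # teardown helper, traced at the end of each run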
00:22:35.501 00:22:35.501 Latency(us) 00:22:35.501 [2024-12-13T13:07:16.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.501 [2024-12-13T13:07:16.277Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:35.501 nvme0n1 : 2.00 21683.84 84.70 0.00 0.00 5894.10 2546.97 14954.12 00:22:35.501 [2024-12-13T13:07:16.277Z] =================================================================================================================== 00:22:35.501 [2024-12-13T13:07:16.277Z] Total : 21683.84 84.70 0.00 0.00 5894.10 2546.97 14954.12 00:22:35.501 0 00:22:35.501 13:07:16 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:35.501 13:07:16 -- host/digest.sh@92 -- # get_accel_stats 00:22:35.501 13:07:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:35.501 13:07:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:35.501 | select(.opcode=="crc32c") 00:22:35.501 | "\(.module_name) \(.executed)"' 00:22:35.501 13:07:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:35.759 13:07:16 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:35.759 13:07:16 -- host/digest.sh@93 -- # exp_module=software 00:22:35.759 13:07:16 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:35.759 13:07:16 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:35.759 13:07:16 -- host/digest.sh@97 -- # killprocess 97005 00:22:35.759 13:07:16 -- common/autotest_common.sh@936 -- # '[' -z 97005 ']' 00:22:35.759 13:07:16 -- common/autotest_common.sh@940 -- # kill -0 97005 00:22:35.759 13:07:16 -- common/autotest_common.sh@941 -- # uname 00:22:35.759 13:07:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:35.759 13:07:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97005 00:22:35.759 killing process with pid 97005 00:22:35.759 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.759 00:22:35.759 Latency(us) 00:22:35.759 [2024-12-13T13:07:16.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.759 [2024-12-13T13:07:16.535Z] =================================================================================================================== 00:22:35.759 [2024-12-13T13:07:16.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.759 13:07:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:35.759 13:07:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:35.759 13:07:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97005' 00:22:35.759 13:07:16 -- common/autotest_common.sh@955 -- # kill 97005 00:22:35.759 13:07:16 -- common/autotest_common.sh@960 -- # wait 97005 00:22:36.017 13:07:16 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:36.017 13:07:16 -- host/digest.sh@77 -- # local rw bs qd 00:22:36.017 13:07:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:36.017 13:07:16 -- host/digest.sh@80 -- # rw=randread 00:22:36.017 13:07:16 -- host/digest.sh@80 -- # bs=131072 00:22:36.017 13:07:16 -- host/digest.sh@80 -- # qd=16 00:22:36.017 13:07:16 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:36.017 13:07:16 -- host/digest.sh@82 -- # bperfpid=97082 00:22:36.017 13:07:16 -- host/digest.sh@83 -- # waitforlisten 97082 /var/tmp/bperf.sock 00:22:36.017 13:07:16 -- 
common/autotest_common.sh@829 -- # '[' -z 97082 ']' 00:22:36.017 13:07:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.017 13:07:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.017 13:07:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:36.017 13:07:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.017 13:07:16 -- common/autotest_common.sh@10 -- # set +x 00:22:36.017 [2024-12-13 13:07:16.711289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:36.017 [2024-12-13 13:07:16.711387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97082 ] 00:22:36.017 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:36.017 Zero copy mechanism will not be used. 00:22:36.276 [2024-12-13 13:07:16.838560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.276 [2024-12-13 13:07:16.895697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.276 13:07:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.276 13:07:16 -- common/autotest_common.sh@862 -- # return 0 00:22:36.276 13:07:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:36.276 13:07:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:36.276 13:07:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:36.843 13:07:17 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:36.843 13:07:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.102 nvme0n1 00:22:37.102 13:07:17 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:37.102 13:07:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.102 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:37.102 Zero copy mechanism will not be used. 00:22:37.102 Running I/O for 2 seconds... 
00:22:39.005 00:22:39.005 Latency(us) 00:22:39.005 [2024-12-13T13:07:19.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.005 [2024-12-13T13:07:19.781Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:39.005 nvme0n1 : 2.00 9849.90 1231.24 0.00 0.00 1621.62 647.91 9651.67 00:22:39.005 [2024-12-13T13:07:19.781Z] =================================================================================================================== 00:22:39.005 [2024-12-13T13:07:19.781Z] Total : 9849.90 1231.24 0.00 0.00 1621.62 647.91 9651.67 00:22:39.005 0 00:22:39.005 13:07:19 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:39.005 13:07:19 -- host/digest.sh@92 -- # get_accel_stats 00:22:39.005 13:07:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:39.005 13:07:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:39.005 13:07:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:39.005 | select(.opcode=="crc32c") 00:22:39.005 | "\(.module_name) \(.executed)"' 00:22:39.572 13:07:20 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:39.572 13:07:20 -- host/digest.sh@93 -- # exp_module=software 00:22:39.572 13:07:20 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:39.572 13:07:20 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:39.572 13:07:20 -- host/digest.sh@97 -- # killprocess 97082 00:22:39.572 13:07:20 -- common/autotest_common.sh@936 -- # '[' -z 97082 ']' 00:22:39.572 13:07:20 -- common/autotest_common.sh@940 -- # kill -0 97082 00:22:39.572 13:07:20 -- common/autotest_common.sh@941 -- # uname 00:22:39.572 13:07:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.572 13:07:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97082 00:22:39.572 killing process with pid 97082 00:22:39.572 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.572 00:22:39.572 Latency(us) 00:22:39.572 [2024-12-13T13:07:20.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.572 [2024-12-13T13:07:20.348Z] =================================================================================================================== 00:22:39.572 [2024-12-13T13:07:20.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.572 13:07:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:39.572 13:07:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:39.572 13:07:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97082' 00:22:39.572 13:07:20 -- common/autotest_common.sh@955 -- # kill 97082 00:22:39.572 13:07:20 -- common/autotest_common.sh@960 -- # wait 97082 00:22:39.572 13:07:20 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:39.572 13:07:20 -- host/digest.sh@77 -- # local rw bs qd 00:22:39.572 13:07:20 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:39.572 13:07:20 -- host/digest.sh@80 -- # rw=randwrite 00:22:39.572 13:07:20 -- host/digest.sh@80 -- # bs=4096 00:22:39.572 13:07:20 -- host/digest.sh@80 -- # qd=128 00:22:39.572 13:07:20 -- host/digest.sh@82 -- # bperfpid=97153 00:22:39.572 13:07:20 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:39.572 13:07:20 -- host/digest.sh@83 -- # waitforlisten 97153 /var/tmp/bperf.sock 00:22:39.572 13:07:20 -- 
common/autotest_common.sh@829 -- # '[' -z 97153 ']' 00:22:39.572 13:07:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:39.572 13:07:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.572 13:07:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:39.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:39.572 13:07:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.572 13:07:20 -- common/autotest_common.sh@10 -- # set +x 00:22:39.572 [2024-12-13 13:07:20.325247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:39.572 [2024-12-13 13:07:20.325356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97153 ] 00:22:39.831 [2024-12-13 13:07:20.449907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.831 [2024-12-13 13:07:20.514189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.767 13:07:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.767 13:07:21 -- common/autotest_common.sh@862 -- # return 0 00:22:40.767 13:07:21 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:40.767 13:07:21 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:40.767 13:07:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:41.026 13:07:21 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.026 13:07:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.285 nvme0n1 00:22:41.285 13:07:21 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:41.285 13:07:21 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.285 Running I/O for 2 seconds... 
00:22:43.817 00:22:43.817 Latency(us) 00:22:43.817 [2024-12-13T13:07:24.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.817 [2024-12-13T13:07:24.593Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:43.817 nvme0n1 : 2.00 27072.15 105.75 0.00 0.00 4723.51 1854.37 8757.99 00:22:43.817 [2024-12-13T13:07:24.593Z] =================================================================================================================== 00:22:43.817 [2024-12-13T13:07:24.593Z] Total : 27072.15 105.75 0.00 0.00 4723.51 1854.37 8757.99 00:22:43.817 0 00:22:43.817 13:07:24 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:43.817 13:07:24 -- host/digest.sh@92 -- # get_accel_stats 00:22:43.817 13:07:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:43.817 13:07:24 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:43.817 | select(.opcode=="crc32c") 00:22:43.817 | "\(.module_name) \(.executed)"' 00:22:43.817 13:07:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:43.817 13:07:24 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:43.817 13:07:24 -- host/digest.sh@93 -- # exp_module=software 00:22:43.817 13:07:24 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:43.817 13:07:24 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:43.817 13:07:24 -- host/digest.sh@97 -- # killprocess 97153 00:22:43.817 13:07:24 -- common/autotest_common.sh@936 -- # '[' -z 97153 ']' 00:22:43.817 13:07:24 -- common/autotest_common.sh@940 -- # kill -0 97153 00:22:43.817 13:07:24 -- common/autotest_common.sh@941 -- # uname 00:22:43.817 13:07:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:43.818 13:07:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97153 00:22:43.818 killing process with pid 97153 00:22:43.818 Received shutdown signal, test time was about 2.000000 seconds 00:22:43.818 00:22:43.818 Latency(us) 00:22:43.818 [2024-12-13T13:07:24.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.818 [2024-12-13T13:07:24.594Z] =================================================================================================================== 00:22:43.818 [2024-12-13T13:07:24.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.818 13:07:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:43.818 13:07:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:43.818 13:07:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97153' 00:22:43.818 13:07:24 -- common/autotest_common.sh@955 -- # kill 97153 00:22:43.818 13:07:24 -- common/autotest_common.sh@960 -- # wait 97153 00:22:43.818 13:07:24 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:43.818 13:07:24 -- host/digest.sh@77 -- # local rw bs qd 00:22:43.818 13:07:24 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:43.818 13:07:24 -- host/digest.sh@80 -- # rw=randwrite 00:22:43.818 13:07:24 -- host/digest.sh@80 -- # bs=131072 00:22:43.818 13:07:24 -- host/digest.sh@80 -- # qd=16 00:22:43.818 13:07:24 -- host/digest.sh@82 -- # bperfpid=97239 00:22:43.818 13:07:24 -- host/digest.sh@83 -- # waitforlisten 97239 /var/tmp/bperf.sock 00:22:43.818 13:07:24 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:43.818 13:07:24 -- 
common/autotest_common.sh@829 -- # '[' -z 97239 ']' 00:22:43.818 13:07:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:43.818 13:07:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.818 13:07:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:43.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:43.818 13:07:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.818 13:07:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.077 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:44.077 Zero copy mechanism will not be used. 00:22:44.077 [2024-12-13 13:07:24.601028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:44.077 [2024-12-13 13:07:24.601148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97239 ] 00:22:44.077 [2024-12-13 13:07:24.735734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.077 [2024-12-13 13:07:24.819427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.013 13:07:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.013 13:07:25 -- common/autotest_common.sh@862 -- # return 0 00:22:45.013 13:07:25 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:45.013 13:07:25 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:45.013 13:07:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:45.271 13:07:25 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.271 13:07:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.530 nvme0n1 00:22:45.530 13:07:26 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:45.530 13:07:26 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:45.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:45.530 Zero copy mechanism will not be used. 00:22:45.530 Running I/O for 2 seconds... 
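This second bperf instance (pid 97239) repeats the same RPC sequence, but with the larger-I/O, shallower-queue workload launched at host/digest.sh@81 above. Reading the flags against the log output (the annotations are editorial glosses, not part of the trace):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
  #   -m 2            core mask 0x2, matching "Core Mask 0x2" in the Job line below
  #   -w/-o/-t/-q     randwrite, 128 KiB I/Os, 2 s runtime, queue depth 16
  #                   (the previous run used 4 KiB I/Os at depth 128)
  #   -o 131072       exceeds the 65536-byte zero-copy threshold, hence the
  #                   "Zero copy mechanism will not be used" notices above
  #   -z              hold the workload until the perform_tests RPC arrives
  #   --wait-for-rpc  defer subsystem init until framework_start_init is called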
00:22:48.063 00:22:48.063 Latency(us) 00:22:48.063 [2024-12-13T13:07:28.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.063 [2024-12-13T13:07:28.839Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:48.063 nvme0n1 : 2.00 7718.65 964.83 0.00 0.00 2068.25 1608.61 6613.18 00:22:48.063 [2024-12-13T13:07:28.839Z] =================================================================================================================== 00:22:48.063 [2024-12-13T13:07:28.839Z] Total : 7718.65 964.83 0.00 0.00 2068.25 1608.61 6613.18 00:22:48.063 0 00:22:48.063 13:07:28 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:48.063 13:07:28 -- host/digest.sh@92 -- # get_accel_stats 00:22:48.063 13:07:28 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:48.063 13:07:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:48.063 13:07:28 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:48.063 | select(.opcode=="crc32c") 00:22:48.063 | "\(.module_name) \(.executed)"' 00:22:48.063 13:07:28 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:48.063 13:07:28 -- host/digest.sh@93 -- # exp_module=software 00:22:48.063 13:07:28 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:48.063 13:07:28 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:48.063 13:07:28 -- host/digest.sh@97 -- # killprocess 97239 00:22:48.063 13:07:28 -- common/autotest_common.sh@936 -- # '[' -z 97239 ']' 00:22:48.063 13:07:28 -- common/autotest_common.sh@940 -- # kill -0 97239 00:22:48.063 13:07:28 -- common/autotest_common.sh@941 -- # uname 00:22:48.063 13:07:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.063 13:07:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97239 00:22:48.063 killing process with pid 97239 00:22:48.063 Received shutdown signal, test time was about 2.000000 seconds 00:22:48.063 00:22:48.063 Latency(us) 00:22:48.063 [2024-12-13T13:07:28.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.063 [2024-12-13T13:07:28.839Z] =================================================================================================================== 00:22:48.063 [2024-12-13T13:07:28.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.063 13:07:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:48.063 13:07:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:48.063 13:07:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97239' 00:22:48.063 13:07:28 -- common/autotest_common.sh@955 -- # kill 97239 00:22:48.063 13:07:28 -- common/autotest_common.sh@960 -- # wait 97239 00:22:48.063 13:07:28 -- host/digest.sh@126 -- # killprocess 96969 00:22:48.063 13:07:28 -- common/autotest_common.sh@936 -- # '[' -z 96969 ']' 00:22:48.063 13:07:28 -- common/autotest_common.sh@940 -- # kill -0 96969 00:22:48.063 13:07:28 -- common/autotest_common.sh@941 -- # uname 00:22:48.063 13:07:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.063 13:07:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96969 00:22:48.063 killing process with pid 96969 00:22:48.063 13:07:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:48.063 13:07:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:48.063 13:07:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96969' 00:22:48.063 
13:07:28 -- common/autotest_common.sh@955 -- # kill 96969 00:22:48.063 13:07:28 -- common/autotest_common.sh@960 -- # wait 96969 00:22:48.322 ************************************ 00:22:48.322 END TEST nvmf_digest_clean 00:22:48.322 ************************************ 00:22:48.322 00:22:48.322 real 0m16.433s 00:22:48.322 user 0m31.140s 00:22:48.322 sys 0m4.690s 00:22:48.323 13:07:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:48.323 13:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:48.323 13:07:29 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:48.323 13:07:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:48.323 13:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:48.323 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.323 ************************************ 00:22:48.323 START TEST nvmf_digest_error 00:22:48.323 ************************************ 00:22:48.323 13:07:29 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:48.323 13:07:29 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:48.323 13:07:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:48.323 13:07:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.323 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.323 13:07:29 -- nvmf/common.sh@469 -- # nvmfpid=97358 00:22:48.323 13:07:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:48.323 13:07:29 -- nvmf/common.sh@470 -- # waitforlisten 97358 00:22:48.323 13:07:29 -- common/autotest_common.sh@829 -- # '[' -z 97358 ']' 00:22:48.323 13:07:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.323 13:07:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.323 13:07:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.323 13:07:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.323 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.323 [2024-12-13 13:07:29.096478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:48.323 [2024-12-13 13:07:29.096566] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.582 [2024-12-13 13:07:29.226532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.582 [2024-12-13 13:07:29.286934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.582 [2024-12-13 13:07:29.287137] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.582 [2024-12-13 13:07:29.287152] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.582 [2024-12-13 13:07:29.287160] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.582 [2024-12-13 13:07:29.287186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.582 13:07:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.582 13:07:29 -- common/autotest_common.sh@862 -- # return 0 00:22:48.582 13:07:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.582 13:07:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.582 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.840 13:07:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.840 13:07:29 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:48.840 13:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.840 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.840 [2024-12-13 13:07:29.395699] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:48.840 13:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.840 13:07:29 -- host/digest.sh@104 -- # common_target_config 00:22:48.840 13:07:29 -- host/digest.sh@43 -- # rpc_cmd 00:22:48.840 13:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.840 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.840 null0 00:22:48.840 [2024-12-13 13:07:29.503913] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.840 [2024-12-13 13:07:29.528052] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.840 13:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.841 13:07:29 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:48.841 13:07:29 -- host/digest.sh@54 -- # local rw bs qd 00:22:48.841 13:07:29 -- host/digest.sh@56 -- # rw=randread 00:22:48.841 13:07:29 -- host/digest.sh@56 -- # bs=4096 00:22:48.841 13:07:29 -- host/digest.sh@56 -- # qd=128 00:22:48.841 13:07:29 -- host/digest.sh@58 -- # bperfpid=97383 00:22:48.841 13:07:29 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:48.841 13:07:29 -- host/digest.sh@60 -- # waitforlisten 97383 /var/tmp/bperf.sock 00:22:48.841 13:07:29 -- common/autotest_common.sh@829 -- # '[' -z 97383 ']' 00:22:48.841 13:07:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:48.841 13:07:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.841 13:07:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:48.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:48.841 13:07:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.841 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.841 [2024-12-13 13:07:29.577518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:48.841 [2024-12-13 13:07:29.577615] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97383 ] 00:22:49.099 [2024-12-13 13:07:29.711205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.099 [2024-12-13 13:07:29.781369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.035 13:07:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.035 13:07:30 -- common/autotest_common.sh@862 -- # return 0 00:22:50.035 13:07:30 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:50.035 13:07:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:50.295 13:07:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:50.295 13:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.295 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:22:50.295 13:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.295 13:07:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.295 13:07:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.554 nvme0n1 00:22:50.554 13:07:31 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:50.554 13:07:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.554 13:07:31 -- common/autotest_common.sh@10 -- # set +x 00:22:50.554 13:07:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.554 13:07:31 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:50.554 13:07:31 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:50.813 Running I/O for 2 seconds... 
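What follows is the expected failure stream rather than a real transport fault: the target's crc32c operations were routed to the accel "error" module and 256 of them are corrupted, so the data digests it emits are wrong and the host, reading with --ddgst, reports a digest error on each affected READ. A condensed, illustrative restatement of the setup traced above, with target RPCs on the default socket and bperf RPCs on /var/tmp/bperf.sock:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o crc32c -m error                    # target: crc32c -> error module
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1                 # count errors, never retry
  $RPC accel_error_inject_error -o crc32c -t disable          # start with injection off
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0  # data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 crc32c results
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests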
00:22:50.813 [2024-12-13 13:07:31.372338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.372406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.372438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.382311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.382362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.382390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.395884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.395933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.395962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.409692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.409769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.409783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.421590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.421641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.421668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.435568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.435634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.435662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.449323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.449374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.449402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.459829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.813 [2024-12-13 13:07:31.459877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.813 [2024-12-13 13:07:31.459905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.813 [2024-12-13 13:07:31.471411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.471478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.471505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.482928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.482979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.483007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.492789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.492838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.492865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.502966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.503016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.503043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.514750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.514812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.514841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.525558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.525608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.525636] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.534454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.534503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.534531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.546574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.546624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.546652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.558493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.558542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.558569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.568590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.568639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.568667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.578646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.578695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.578722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.814 [2024-12-13 13:07:31.588264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:50.814 [2024-12-13 13:07:31.588313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.814 [2024-12-13 13:07:31.588340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.599856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.599905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.599932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.609784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.609834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.609862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.621493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.621543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.621570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.631350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.631402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.631431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.644423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.644473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.644500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.654784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.654832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.654859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.665691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.665738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.665796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.678609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.678657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.073 [2024-12-13 13:07:31.678684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.073 [2024-12-13 13:07:31.692480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.073 [2024-12-13 13:07:31.692529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.073 [2024-12-13 13:07:31.692557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.704447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.704496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.704523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.716535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.716585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.716613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.727763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.727821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.727850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.739608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.739657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.739684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.752095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.752144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.752171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.761843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.761892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21464 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.761919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.772538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.772588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.772616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.783290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.783343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.783371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.793964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.794015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.794043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.804478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.804528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.804555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.815527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.815592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.815620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.826017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.826067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.826095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.074 [2024-12-13 13:07:31.839388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.074 [2024-12-13 13:07:31.839440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.074 [2024-12-13 13:07:31.839482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.851366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.851420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.851434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.865223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.865274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.865302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.880798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.880861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.880876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.892238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.892286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.892315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.904109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.904159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.904187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.916510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.916561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.916588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.926154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 
[2024-12-13 13:07:31.926203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.926231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.939373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.939439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.939466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.952029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.952078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.952106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.963908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.963957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.963985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.974491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.974541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.984220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.984270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.984297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:31.995020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:31.995070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:31.995098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.005155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.005204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.005231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.017158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.017207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.017236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.030187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.030235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.030262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.039695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.039768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.039782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.050793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.050838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.050850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.065875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.065925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.065945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.078424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.078474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.078486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.088221] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.088269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.088280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.334 [2024-12-13 13:07:32.099739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.334 [2024-12-13 13:07:32.099796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.334 [2024-12-13 13:07:32.099825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.593 [2024-12-13 13:07:32.112585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.593 [2024-12-13 13:07:32.112635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.593 [2024-12-13 13:07:32.112648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.593 [2024-12-13 13:07:32.126396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.593 [2024-12-13 13:07:32.126445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.593 [2024-12-13 13:07:32.126457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.593 [2024-12-13 13:07:32.140802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.593 [2024-12-13 13:07:32.140830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.593 [2024-12-13 13:07:32.140841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.154831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.154879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.154891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.166284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.166332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.166344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
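Each injected failure appears in this stream as a triple: the host-side digest-check error from nvme_tcp.c:1391, the offending READ command, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22). When skimming a captured copy of this output, a quick tally can be made as below (the log file name is hypothetical):

  grep -o 'data digest error on tqpair' bperf.log | wc -l
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log | wc -l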
00:22:51.594 [2024-12-13 13:07:32.176841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.176889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.176901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.191048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.191097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.191135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.204703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.204759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.204773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.217914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.217962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.217973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.230704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.230760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.230773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.241575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.241624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.241636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.252468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.252517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.252529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.265054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.265106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.265135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.276086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.276136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.276164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.286735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.286791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.286819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.296073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.296122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.296150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.308209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.308259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.308288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.319561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.319625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.319654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.330024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.330074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.330102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.340489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.340541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.340569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.352150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.352200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.352228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.594 [2024-12-13 13:07:32.362091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.594 [2024-12-13 13:07:32.362141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.594 [2024-12-13 13:07:32.362169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.852 [2024-12-13 13:07:32.372455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.852 [2024-12-13 13:07:32.372506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.852 [2024-12-13 13:07:32.372535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.852 [2024-12-13 13:07:32.383078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.383185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.393812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.393841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.393868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.403245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.403297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.403326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.416417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.416468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.416496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.426608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.426657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.426685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.436908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.436958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.436986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.445767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.445816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.445844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.456454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.456505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.456534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.467914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.467963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.467991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.477842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.477891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 
[2024-12-13 13:07:32.477919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.490577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.490626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.490654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.499390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.499441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.499453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.512341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.512389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.512417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.525231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.525280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.525324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.538867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.538917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.538945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.552053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.552105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.552132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.564783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.564832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9039 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.564860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.578627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.578677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.578705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.592076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.592125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.592153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.603663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.603712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.603740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.612725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.612784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.612812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.853 [2024-12-13 13:07:32.622555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:51.853 [2024-12-13 13:07:32.622605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.853 [2024-12-13 13:07:32.622633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.635676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.635728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.635767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.644444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.644493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:109 nsid:1 lba:11182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.644521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.656936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.656987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.657015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.668488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.668522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.668549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.679242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.679277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.679306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.688594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.688643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.688670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.700081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.700130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.700158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.713561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.713611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.713640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.727140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.727191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.727219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.739847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.739890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.739918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.749798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.749843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.749870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.764119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.764185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.764213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.774655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.774703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.774731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.784165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.784213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.784241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.798071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.798120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.798147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.808564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 
[2024-12-13 13:07:32.808608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.808636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.818769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.818817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.818845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.829044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.829109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.829137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.839339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.839393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.839407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.849569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.849616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.849643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.861272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.861321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.861349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.872373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.872406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.872434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.112 [2024-12-13 13:07:32.886403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf5f7f0) 00:22:52.112 [2024-12-13 13:07:32.886439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.112 [2024-12-13 13:07:32.886468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.371 [2024-12-13 13:07:32.902722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.371 [2024-12-13 13:07:32.902798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.371 [2024-12-13 13:07:32.902812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.371 [2024-12-13 13:07:32.917669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.371 [2024-12-13 13:07:32.917718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.371 [2024-12-13 13:07:32.917746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.371 [2024-12-13 13:07:32.929418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.371 [2024-12-13 13:07:32.929466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.371 [2024-12-13 13:07:32.929495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.371 [2024-12-13 13:07:32.939273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.371 [2024-12-13 13:07:32.939324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.371 [2024-12-13 13:07:32.939352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.371 [2024-12-13 13:07:32.952874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.371 [2024-12-13 13:07:32.952922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.371 [2024-12-13 13:07:32.952951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.371 [2024-12-13 13:07:32.964096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.371 [2024-12-13 13:07:32.964145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.371 [2024-12-13 13:07:32.964173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:32.976074] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:32.976124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:32.976152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:32.985445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:32.985495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:32.985523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:32.995444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:32.995495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:32.995523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.006864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.006914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.006942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.019183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.019250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.019278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.029484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.029518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.029545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.044065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.044100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.044128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:52.372 [2024-12-13 13:07:33.057694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.057729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.057783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.071372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.071425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.071469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.085047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.085097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.085125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.094382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.094432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.094460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.107959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.108009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.108037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.121676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.121725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.121753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.372 [2024-12-13 13:07:33.135437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.372 [2024-12-13 13:07:33.135475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.372 [2024-12-13 13:07:33.135488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.151731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.151777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.151791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.164724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.164783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.164811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.175421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.175489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.175532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.189562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.189613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.189641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.201296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.201346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.201374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.211241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.211293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.211323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.225640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.225690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.225718] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.239234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.239287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.239316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.252521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.252569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.252598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.261643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.261693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.261705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.274388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.274440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.274470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.286701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.286776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.286806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.300254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.300303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.300314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.310547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.310594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.310606] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.324077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.324109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.324121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.337565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.337614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.337626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 [2024-12-13 13:07:33.351545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5f7f0) 00:22:52.635 [2024-12-13 13:07:33.351607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.635 [2024-12-13 13:07:33.351619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.635 00:22:52.635 Latency(us) 00:22:52.635 [2024-12-13T13:07:33.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.635 [2024-12-13T13:07:33.411Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:52.635 nvme0n1 : 2.00 21548.94 84.18 0.00 0.00 5933.92 2725.70 18230.92 00:22:52.635 [2024-12-13T13:07:33.411Z] =================================================================================================================== 00:22:52.635 [2024-12-13T13:07:33.412Z] Total : 21548.94 84.18 0.00 0.00 5933.92 2725.70 18230.92 00:22:52.636 0 00:22:52.636 13:07:33 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:52.636 13:07:33 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:52.636 13:07:33 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:52.636 | .driver_specific 00:22:52.636 | .nvme_error 00:22:52.636 | .status_code 00:22:52.636 | .command_transient_transport_error' 00:22:52.636 13:07:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:52.905 13:07:33 -- host/digest.sh@71 -- # (( 169 > 0 )) 00:22:52.905 13:07:33 -- host/digest.sh@73 -- # killprocess 97383 00:22:52.905 13:07:33 -- common/autotest_common.sh@936 -- # '[' -z 97383 ']' 00:22:52.905 13:07:33 -- common/autotest_common.sh@940 -- # kill -0 97383 00:22:52.905 13:07:33 -- common/autotest_common.sh@941 -- # uname 00:22:52.905 13:07:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.905 13:07:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97383 00:22:53.164 13:07:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:53.164 13:07:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:53.164 killing process with pid 97383 00:22:53.164 
13:07:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97383' 00:22:53.164 13:07:33 -- common/autotest_common.sh@955 -- # kill 97383 00:22:53.164 Received shutdown signal, test time was about 2.000000 seconds 00:22:53.164 00:22:53.164 Latency(us) 00:22:53.164 [2024-12-13T13:07:33.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.164 [2024-12-13T13:07:33.940Z] =================================================================================================================== 00:22:53.164 [2024-12-13T13:07:33.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.164 13:07:33 -- common/autotest_common.sh@960 -- # wait 97383 00:22:53.164 13:07:33 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:53.164 13:07:33 -- host/digest.sh@54 -- # local rw bs qd 00:22:53.164 13:07:33 -- host/digest.sh@56 -- # rw=randread 00:22:53.164 13:07:33 -- host/digest.sh@56 -- # bs=131072 00:22:53.164 13:07:33 -- host/digest.sh@56 -- # qd=16 00:22:53.164 13:07:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:53.164 13:07:33 -- host/digest.sh@58 -- # bperfpid=97473 00:22:53.164 13:07:33 -- host/digest.sh@60 -- # waitforlisten 97473 /var/tmp/bperf.sock 00:22:53.164 13:07:33 -- common/autotest_common.sh@829 -- # '[' -z 97473 ']' 00:22:53.164 13:07:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:53.164 13:07:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:53.164 13:07:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:53.164 13:07:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.164 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:22:53.164 [2024-12-13 13:07:33.927305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:53.164 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:53.164 Zero copy mechanism will not be used. 
00:22:53.164 [2024-12-13 13:07:33.927385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97473 ] 00:22:53.423 [2024-12-13 13:07:34.058217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.423 [2024-12-13 13:07:34.123194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.359 13:07:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.359 13:07:34 -- common/autotest_common.sh@862 -- # return 0 00:22:54.359 13:07:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:54.359 13:07:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:54.617 13:07:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:54.617 13:07:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.617 13:07:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.617 13:07:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.617 13:07:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.617 13:07:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.876 nvme0n1 00:22:54.876 13:07:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:54.876 13:07:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.876 13:07:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.876 13:07:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.876 13:07:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:54.876 13:07:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:54.876 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:54.876 Zero copy mechanism will not be used. 00:22:54.876 Running I/O for 2 seconds... 
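The wall of "data digest error ... COMMAND TRANSIENT TRANSPORT ERROR (00/22)" entries above and below this point is the intended outcome of this digest test, not a failure: the target's accel layer is told to corrupt its CRC32C computation at a fixed interval, the bdevperf initiator attaches the namespace with data digests enabled (--ddgst), and the pass/fail check is simply that bdev_get_iostat reports a non-zero command_transient_transport_error count afterwards (the "(( 169 > 0 ))" check traced earlier). Below is a minimal sketch of that flow, reconstructed from the xtrace lines in this log; the binary paths, bperf RPC socket, target address, and subsystem NQN are the ones shown in the trace, while the target-side RPC socket (the SPDK default) and the backgrounding/wait handling are assumptions standing in for the waitforlisten helper the real script uses.

#!/usr/bin/env bash
# Sketch of the data-digest error-injection test seen in this log.
# Assumption: the NVMe-oF TCP target is already running and listening on
# 10.0.0.2:4420 with nqn.2016-06.io.spdk:cnode1 exported, and its RPC
# socket is the SPDK default (/var/tmp/spdk.sock).

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf as the initiator: 128 KiB random reads, queue depth 16,
# 2-second run, waiting for RPC configuration (-z). The real script waits
# for the RPC socket (waitforlisten) before issuing any RPCs.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z &
sleep 1

# Keep per-bdev NVMe error statistics and retry failed I/O indefinitely,
# so digest errors show up as counted transient transport errors rather
# than hard I/O failures.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Target side: make sure no accel error injection is active yet.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the remote namespace with data digest enabled (--ddgst), so the
# initiator verifies a CRC32C over every received data PDU.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: corrupt every 32nd CRC32C computation, producing the
# "data digest error" / TRANSIENT TRANSPORT ERROR lines seen in this log.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then count completions that failed with a transient
# transport error; the test passes only if at least one was observed.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

errcount=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
(( errcount > 0 ))

The jq filter is taken verbatim from the get_transient_errcount trace above; the lba len:32 values in the entries that follow reflect the 131072-byte I/O size (32 blocks of 4096 bytes), whereas the earlier randread job used 4096-byte I/O (len:1).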
00:22:54.876 [2024-12-13 13:07:35.640270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:54.876 [2024-12-13 13:07:35.640332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.876 [2024-12-13 13:07:35.640363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:54.876 [2024-12-13 13:07:35.644594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:54.876 [2024-12-13 13:07:35.644647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.876 [2024-12-13 13:07:35.644676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:54.876 [2024-12-13 13:07:35.648854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:54.876 [2024-12-13 13:07:35.648938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.876 [2024-12-13 13:07:35.648952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.653088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.653158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.653171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.656907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.656943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.656957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.660953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.660990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.661019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.665148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.665215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.665243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.669469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.669536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.669565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.673516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.673567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.673595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.677572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.677623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.677652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.681821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.681857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.681887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.685975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.686011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.686040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.689255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.689305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.689333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.693295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.693345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.693373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.697052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.137 [2024-12-13 13:07:35.697106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.137 [2024-12-13 13:07:35.697135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.137 [2024-12-13 13:07:35.700516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.700569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.700583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.704404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.704458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.704486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.708353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.708402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.708431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.711349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.711389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.711402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.715267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.715305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.715334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.719221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.719259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.719272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.722295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.722348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.722361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.726237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.726290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.726319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.730219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.730271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.730298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.733406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.733458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.733485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.736820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.736872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.736900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.740656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.740709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.740737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.743722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.743783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 
13:07:35.743811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.747065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.747122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.747153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.751089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.751165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.751194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.754650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.754700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.754728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.757644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.757694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.757723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.761226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.761278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.761307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.764979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.765031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.765059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.768231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.768281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.768309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.772231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.772283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.772311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.775412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.775448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.775477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.779522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.779573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.779601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.783181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.783234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.783248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.786895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.786929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.786957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.790953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.791005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.791034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.793880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.793929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.793957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.138 [2024-12-13 13:07:35.796879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.138 [2024-12-13 13:07:35.796930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.138 [2024-12-13 13:07:35.796957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.800852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.800902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.800930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.804865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.804915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.804943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.808031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.808081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.808109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.810931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.810981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.811010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.814768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.814832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.814863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.817819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.817869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.817897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.821505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.821557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.821586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.824634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.824684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.824712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.828206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.828258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.828286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.831918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.831969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.831998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.835929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.835979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.836007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.839355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.839423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.839435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.843612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.843662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.843690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.846682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.846733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.846787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.850559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.850612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.850640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.854325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.854377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.854406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.857475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.857523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.857551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.861256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.861307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.861336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.864782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.864831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.864859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.868701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 
[2024-12-13 13:07:35.868774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.868787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.872135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.872184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.872212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.875769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.875831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.875859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.878824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.878873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.878901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.881621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.881671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.881699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.885556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.885606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.885633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.889731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.889810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.889839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.892611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xb064a0) 00:22:55.139 [2024-12-13 13:07:35.892660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.139 [2024-12-13 13:07:35.892688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.139 [2024-12-13 13:07:35.896028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.140 [2024-12-13 13:07:35.896078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.140 [2024-12-13 13:07:35.896107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.140 [2024-12-13 13:07:35.899521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.140 [2024-12-13 13:07:35.899572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.140 [2024-12-13 13:07:35.899601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.140 [2024-12-13 13:07:35.903134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.140 [2024-12-13 13:07:35.903186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.140 [2024-12-13 13:07:35.903198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.140 [2024-12-13 13:07:35.906581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.140 [2024-12-13 13:07:35.906630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.140 [2024-12-13 13:07:35.906658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.140 [2024-12-13 13:07:35.910261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.140 [2024-12-13 13:07:35.910312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.140 [2024-12-13 13:07:35.910340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.914235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.914285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.914313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.918204] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.918238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.918249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.921720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.921795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.921823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.925252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.925302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.925330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.929221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.929270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.929298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.932783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.932832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.932860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.936144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.936176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.936204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.940209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.940242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.940270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
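The messages repeated throughout this stretch of the log record SPDK's NVMe/TCP data digest error path: nvme_tcp_accel_seq_recv_compute_crc32_done reports that the CRC32C computed over a received data PDU does not match the digest carried with the PDU, and each affected READ is completed back to the caller as a TRANSIENT TRANSPORT ERROR (status 00/22) with dnr:0, i.e. as a retryable failure. Below is a minimal sketch of the digest check, assuming a plain bitwise CRC32C in Python rather than SPDK's accelerated implementation; the crc32c function and received_digest value are illustrative only, not SPDK APIs.

def crc32c(data: bytes) -> int:
    # Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
    # initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. This is the digest
    # algorithm NVMe/TCP uses for header and data digests.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Hypothetical check mirroring what the log reports: a mismatch between the
# digest computed over the payload and the digest received with the PDU is
# treated as a data digest error on the queue pair.
payload = b"example data PDU payload"
received_digest = crc32c(payload) ^ 0x1  # deliberately wrong, for illustration
if crc32c(payload) != received_digest:
    print("data digest error -> complete command as TRANSIENT TRANSPORT ERROR (00/22)")

The same pattern (digest error followed by the command print-out and its 00/22 completion) repeats for many LBAs and CIDs on qpair 0xb064a0 through the rest of this section.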
00:22:55.400 [2024-12-13 13:07:35.943716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.943776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.943805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.947740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.947798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.947826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.951433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.951484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.951512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.955076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.955119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.955133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.959162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.959200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.959213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.963274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.963328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.963341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.967530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.967579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.967621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.400 [2024-12-13 13:07:35.970962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.400 [2024-12-13 13:07:35.971012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.400 [2024-12-13 13:07:35.971041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:35.974837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:35.974866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:35.974893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:35.978887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:35.978921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:35.978948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:35.982646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:35.982693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:35.982721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:35.986144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:35.986193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:35.986221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:35.989772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:35.989829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:35.989856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:35.993448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:35.993497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:35.993524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:35.997374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:35.997423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:35.997449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.001056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.001105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.001133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.004769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.004816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.004843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.007888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.007937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.007965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.010797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.010845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.010873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.014347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.014395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.014423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.018067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.018130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.018158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.021502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.021550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.021577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.025206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.025254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.025281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.028906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.028954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.032882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.032929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.032956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.036577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.036626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.036653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.040085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.040132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.040174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.043966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.044015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 
[2024-12-13 13:07:36.044042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.047291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.047326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.047355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.051315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.051366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.051409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.054598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.054646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.054673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.057739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.057796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.057824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.061538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.061587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.061615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.065254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.065304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.065332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.068334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.068384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:55.401 [2024-12-13 13:07:36.068411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.401 [2024-12-13 13:07:36.072214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.401 [2024-12-13 13:07:36.072266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.072293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.075433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.075512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.075539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.079030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.079077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.079104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.082219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.082265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.082293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.086252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.086299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.086327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.090039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.090087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.090114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.093639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.093688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.093716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.097218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.097266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.097293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.101014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.101063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.101090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.104311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.104360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.104402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.107580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.107614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.107625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.111035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.111085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.111136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.114553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.114603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.114631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.117832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.117881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.117909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.121816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.121865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.121893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.125476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.125526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.125553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.129102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.129152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.129180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.132981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.133031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.133060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.137507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.137558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.137586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.141260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.141309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.141338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.143896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 
[2024-12-13 13:07:36.143945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.143973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.147742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.147817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.147846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.150816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.150847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.150875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.154643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.154694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.154722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.158965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.159015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.159044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.161979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.162015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.162027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.165475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.402 [2024-12-13 13:07:36.165526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.402 [2024-12-13 13:07:36.165554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.402 [2024-12-13 13:07:36.168903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb064a0) 00:22:55.403 [2024-12-13 13:07:36.168954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.403 [2024-12-13 13:07:36.168982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.403 [2024-12-13 13:07:36.172568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.403 [2024-12-13 13:07:36.172617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.403 [2024-12-13 13:07:36.172645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.176525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.176574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.176602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.180091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.180140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.180183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.183537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.183585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.183628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.187461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.187543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.187571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.191369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.191406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.191419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.194653] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.194703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.194731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.197586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.197636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.197663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.201215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.201263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.201291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.205078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.205127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.205154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.208696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.208770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.208783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.212090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.212141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.212169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.216061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.216131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.216160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:55.663 [2024-12-13 13:07:36.220074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.220127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.220156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.223871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.223919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.223946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.226527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.226575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.226603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.230305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.230354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.230381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.233562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.233613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.233640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.237048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.237098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.237126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.240582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.240633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.240661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.244445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.244495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.244523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.247999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.248065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.248093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.251623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.251672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.251699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.255063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.255135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.255149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.258749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.663 [2024-12-13 13:07:36.258809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.663 [2024-12-13 13:07:36.258837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.663 [2024-12-13 13:07:36.262638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.262687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.262715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.265996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.266030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.266059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.269380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.269430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.269458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.272676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.272726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.272754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.276253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.276303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.276331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.279600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.279650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.279678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.282690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.282739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.282778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.286363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.286413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.286441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.289614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.289664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.289692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.292762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.292812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.292840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.296555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.296605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.296634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.299938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.299988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.300016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.302734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.302793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.302822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.306214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.306261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.306289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.309680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.309728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.309780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.313280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.313329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 
[2024-12-13 13:07:36.313357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.316549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.316599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.316626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.320349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.320400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.320429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.324190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.324241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.324269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.327944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.327995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.328024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.331240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.331294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.331324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.334515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.334565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.334593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.338630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.338680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.338709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.341789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.341838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.341867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.345619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.345670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.345698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.349159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.349208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.349236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.353046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.353097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.353125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.356494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.664 [2024-12-13 13:07:36.356542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.664 [2024-12-13 13:07:36.356570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.664 [2024-12-13 13:07:36.360021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.360070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.360114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.363889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.363937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.363966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.367227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.367278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.367307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.370828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.370877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.370905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.374896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.374929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.374957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.378859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.378891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.378919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.382297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.382329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.382357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.385989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.386022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.386050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.388539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.388572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.388600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.391856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.391888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.391916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.395799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.395862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.395891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.399173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.399226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.399239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.402724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.402781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.402810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.406200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.406251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.406278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.409639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.409673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.409701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.413501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 
[2024-12-13 13:07:36.413534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.413563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.417322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.417357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.417385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.421036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.421070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.421098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.424713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.424786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.424799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.428360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.428425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.428453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.431474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.431526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.431558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.665 [2024-12-13 13:07:36.434593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.665 [2024-12-13 13:07:36.434643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.665 [2024-12-13 13:07:36.434671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.438138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.438189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.438217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.442080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.442115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.442142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.445595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.445645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.445673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.449415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.449451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.449478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.452797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.452830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.452858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.456348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.456382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.456409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.459911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.459961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.459989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.463339] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.463376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.463420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.466920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.466970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.466998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.470156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.470205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.470234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.473947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.473982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.474009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.477602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.477652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.477681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.481061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.926 [2024-12-13 13:07:36.481096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.926 [2024-12-13 13:07:36.481124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.926 [2024-12-13 13:07:36.484577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.484612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.484640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:55.927 [2024-12-13 13:07:36.487765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.487809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.487838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.491052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.491086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.491139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.495023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.495058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.495087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.498494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.498546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.498574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.502023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.502056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.502084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.505694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.505728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.505765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.508764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.508798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.508825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.512016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.512050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.512078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.515938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.515972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.515999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.519159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.519213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.519243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.522836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.522871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.522899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.526511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.526546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.526573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.529872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.529905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.529933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.533739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.533784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.537669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.537704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.537732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.541364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.541413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.541440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.545342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.545377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.545404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.548508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.548542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.548570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.552043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.552078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.552105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.555384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.555436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.555481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.558477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.558526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.558554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.562415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.562449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.562477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.565970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.566005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.566033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.568848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.568896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.568940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.572512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.572545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.572572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.576763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.576812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.576840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.927 [2024-12-13 13:07:36.580096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.927 [2024-12-13 13:07:36.580145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.927 [2024-12-13 13:07:36.580172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.583756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.583817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 
[2024-12-13 13:07:36.583846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.587367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.587434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.587446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.591060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.591094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.591146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.594898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.594932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.594960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.597882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.597931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.597958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.600941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.600974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.601002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.604229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.604279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.604307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.607945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.607978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.608006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.611912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.611947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.611974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.614929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.614963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.614991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.617727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.617769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.617798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.621423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.621458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.621485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.625850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.625884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.625912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.629385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.629419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.629446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.633102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.633164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.633192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.636780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.636820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.636848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.640586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.640619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.640647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.644639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.644672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.644699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.648083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.648116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.648128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.651016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.651066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.651094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.654442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.654492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.654521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.657682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.657731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.657768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.661129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.661180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.661208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.664665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.664714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.664742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.667874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.667923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.667952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.671165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.671217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.671246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.674891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.928 [2024-12-13 13:07:36.674941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.928 [2024-12-13 13:07:36.674969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.928 [2024-12-13 13:07:36.678734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.929 [2024-12-13 13:07:36.678808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.929 [2024-12-13 13:07:36.678836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.929 [2024-12-13 13:07:36.682068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.929 
[2024-12-13 13:07:36.682117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.929 [2024-12-13 13:07:36.682144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.929 [2024-12-13 13:07:36.685336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.929 [2024-12-13 13:07:36.685386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.929 [2024-12-13 13:07:36.685414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.929 [2024-12-13 13:07:36.688976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.929 [2024-12-13 13:07:36.689026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.929 [2024-12-13 13:07:36.689054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.929 [2024-12-13 13:07:36.691970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.929 [2024-12-13 13:07:36.692018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.929 [2024-12-13 13:07:36.692047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.929 [2024-12-13 13:07:36.695578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.929 [2024-12-13 13:07:36.695627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.929 [2024-12-13 13:07:36.695654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.929 [2024-12-13 13:07:36.699835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:55.929 [2024-12-13 13:07:36.699895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.929 [2024-12-13 13:07:36.699923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.189 [2024-12-13 13:07:36.702916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.189 [2024-12-13 13:07:36.702965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.189 [2024-12-13 13:07:36.702993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.189 [2024-12-13 13:07:36.705937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb064a0) 00:22:56.189 [2024-12-13 13:07:36.705986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.189 [2024-12-13 13:07:36.706014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.189 [2024-12-13 13:07:36.709434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.189 [2024-12-13 13:07:36.709483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.189 [2024-12-13 13:07:36.709510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.189 [2024-12-13 13:07:36.712665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.189 [2024-12-13 13:07:36.712714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.189 [2024-12-13 13:07:36.712742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.189 [2024-12-13 13:07:36.716656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.189 [2024-12-13 13:07:36.716705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.189 [2024-12-13 13:07:36.716733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.189 [2024-12-13 13:07:36.719611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.189 [2024-12-13 13:07:36.719660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.189 [2024-12-13 13:07:36.719687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.723074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.723147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.723177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.726517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.726565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.726592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.730240] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.730289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.730317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.733297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.733346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.733374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.737242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.737290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.737318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.741124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.741174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.741202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.744913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.744960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.744988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.748380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.748428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.748456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.751783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.751841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.751870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:56.190 [2024-12-13 13:07:36.755322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.755373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.755416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.758493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.758541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.758569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.762110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.762158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.762186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.765536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.765586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.765613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.769044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.769093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.769121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.772309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.772359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.772387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.776263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.776312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.776340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.779764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.779822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.779851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.783013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.783062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.783090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.786336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.786385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.786413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.789516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.789566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.789593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.792722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.792781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.792810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.796465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.796515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.796543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.799265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.799301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.799330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.802440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.802489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.802517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.805890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.805938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.805967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.809817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.809865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.809905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.813415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.813464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.190 [2024-12-13 13:07:36.813491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.190 [2024-12-13 13:07:36.816911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.190 [2024-12-13 13:07:36.816943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.816971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.820143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.820190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.820218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.823734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.823792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.823820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.827405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.827456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.827468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.830461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.830509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.830538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.834438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.834489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.834517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.837630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.837680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.837708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.841362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.841411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.841439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.846207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.846259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.846288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.850185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.850235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 
[2024-12-13 13:07:36.850263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.853999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.854035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.854064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.857591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.857640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.857668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.861892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.861927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.861956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.866407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.866456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.866483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.870241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.870291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.870319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.873885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.873919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.873946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.877058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.877093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.877121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.880413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.880462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.880490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.884405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.884472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.884501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.888451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.888501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.888529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.892184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.892233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.892260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.896019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.896068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.896096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.899595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.899643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.899671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.903162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.903200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.903214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.906717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.906791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.906820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.910546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.910596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.910624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.914323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.914375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.914403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.917315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.917365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.191 [2024-12-13 13:07:36.917393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.191 [2024-12-13 13:07:36.920911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.191 [2024-12-13 13:07:36.920962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.920991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.924561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.924612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.924640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.928183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.928233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.928262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.932324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.932376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.932405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.936074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.936125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.936154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.939522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.939574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.939618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.942698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.942788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.942802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.946885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.946934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.946962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.950147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.950197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.950225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.954156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 
[2024-12-13 13:07:36.954222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.954250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.957460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.957510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.957539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.192 [2024-12-13 13:07:36.961183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.192 [2024-12-13 13:07:36.961234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.192 [2024-12-13 13:07:36.961263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.452 [2024-12-13 13:07:36.964974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.452 [2024-12-13 13:07:36.965011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.452 [2024-12-13 13:07:36.965039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.452 [2024-12-13 13:07:36.968680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.452 [2024-12-13 13:07:36.968773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.452 [2024-12-13 13:07:36.968819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.452 [2024-12-13 13:07:36.972650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.452 [2024-12-13 13:07:36.972686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.452 [2024-12-13 13:07:36.972715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.452 [2024-12-13 13:07:36.976547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.452 [2024-12-13 13:07:36.976595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.452 [2024-12-13 13:07:36.976624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.452 [2024-12-13 13:07:36.980251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:36.980303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:36.980346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:36.983984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:36.984039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:36.984053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:36.988191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:36.988242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:36.988270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:36.992367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:36.992433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:36.992461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:36.996596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:36.996648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:36.996677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:36.999157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:36.999194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:36.999223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.002736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.002793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.002822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.006398] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.006448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.006476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.009861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.009894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.009922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.013628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.013677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.013706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.017084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.017134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.017163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.021737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.021800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.021829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.025738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.025797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.025825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.029783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.029844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.029873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:56.453 [2024-12-13 13:07:37.033680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.033730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.033767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.037184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.037234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.037262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.040375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.040426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.040453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.044738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.044810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.044839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.048367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.048416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.048444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.051712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.051785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.051815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.055226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.055264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.055294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.058821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.058869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.058897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.062055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.062119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.062147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.065334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.065385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.065413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.069028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.069077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.069105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.072425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.072474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.072502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.076241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.076290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.453 [2024-12-13 13:07:37.076318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.453 [2024-12-13 13:07:37.079819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.453 [2024-12-13 13:07:37.079877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.079906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.084271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.084320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.084348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.087551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.087600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.087641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.090686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.090735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.090773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.093523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.093571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.093599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.097286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.097336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.097364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.101094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.101143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.101171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.104783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.104840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.104868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.108062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.108111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.108139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.111308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.111344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.111372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.114177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.114225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.114253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.117303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.117353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.117381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.121091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.121140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.121169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.124466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.124516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.124544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.128406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.128455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 
[2024-12-13 13:07:37.128483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.132004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.132054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.132082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.135299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.135351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.135379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.138247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.138298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.138326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.141737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.141795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.141822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.145225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.145275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.145303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.148804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.148851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.148879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.152470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.152519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.152546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.155969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.156019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.156047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.159645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.159694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.159722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.162951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.163001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.163029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.166255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.166305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.166349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.169690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.169765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.169778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.172850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.172899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.454 [2024-12-13 13:07:37.172927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.454 [2024-12-13 13:07:37.176189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.454 [2024-12-13 13:07:37.176238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.176265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.179515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.179567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.183311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.183364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.183377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.186444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.186493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.186520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.189696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.189768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.189781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.192917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.192963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.192990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.196651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.196698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.196725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.199972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.200036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.200063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.203505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.203554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.203582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.206789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.206836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.206863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.210183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.210231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.210258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.213733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.213791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.213819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.217264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.217312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.217339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.220637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 [2024-12-13 13:07:37.220686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.220713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.455 [2024-12-13 13:07:37.224358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.455 
[2024-12-13 13:07:37.224406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.455 [2024-12-13 13:07:37.224434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.228008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.228060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.228073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.231797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.231853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.231865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.235324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.235360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.235374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.239535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.239569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.239582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.243327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.243365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.243378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.247183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.247222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.247235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.251260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.251310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.251323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.255546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.255598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.255641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.259088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.259161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.259175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.262703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.262786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.262804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.266060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.266108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.266136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.269559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.269609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.269636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.273147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.273196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.273224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.276213] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.276261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.276289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.279159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.279196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.279224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.282632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.282681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.282709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.286754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.286815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.286844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.290725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.290786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.290815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.294037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.294087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.294115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.297452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.297488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.297517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:56.716 [2024-12-13 13:07:37.301287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.301347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.301390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.305301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.305351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.716 [2024-12-13 13:07:37.305378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.716 [2024-12-13 13:07:37.309241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.716 [2024-12-13 13:07:37.309290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.309318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.313189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.313238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.313266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.316987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.317035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.317063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.320429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.320477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.320503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.324012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.324062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.324090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.327648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.327698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.327726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.331366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.331406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.331419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.335073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.335162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.335176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.338375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.338425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.338453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.341939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.341973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.342001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.345187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.345236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.345264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.348248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.348296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.348323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.351157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.351195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.351208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.355018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.355067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.355095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.358481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.358532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.358560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.361841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.361874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.361903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.365504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.365555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.365583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.369559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.369611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.369639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.373001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.373052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.373080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.376648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.376698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.376726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.380123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.380187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.380215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.383825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.383886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.383915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.387480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.387531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.387559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.391207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.391257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.391286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.394598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.394647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.394675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.398351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.398400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 
[2024-12-13 13:07:37.398427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.401419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.401468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.401496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.404823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.717 [2024-12-13 13:07:37.404871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.717 [2024-12-13 13:07:37.404898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.717 [2024-12-13 13:07:37.408346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.408396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.408424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.411836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.411882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.411910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.415051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.415100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.415168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.418676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.418725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.418753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.422033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.422068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.422095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.425809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.425858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.425886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.429380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.429429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.429457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.432890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.432938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.432967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.435800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.435858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.435887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.439437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.439503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.439532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.442935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.442983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.443010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.446555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.446606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.446634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.449967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.450018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.450045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.453614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.453662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.453690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.456992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.457042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.457070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.460157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.460206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.460234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.463980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.464028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.464056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.466974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.467024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.467052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.470833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.470882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.470911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.474303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.474353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.474381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.478162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.478238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.478266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.482238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.482289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.482317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.485969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.486021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.486065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.718 [2024-12-13 13:07:37.489582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.718 [2024-12-13 13:07:37.489633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.718 [2024-12-13 13:07:37.489645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.978 [2024-12-13 13:07:37.493226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.978 [2024-12-13 13:07:37.493293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.978 [2024-12-13 13:07:37.493321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.978 [2024-12-13 13:07:37.496637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.978 
[2024-12-13 13:07:37.496692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.978 [2024-12-13 13:07:37.496720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.978 [2024-12-13 13:07:37.501042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.501093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.501122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.504324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.504375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.504418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.508231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.508284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.508313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.512798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.512874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.512903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.516597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.516647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.516676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.520449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.520498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.520526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.524363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.524412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.524440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.527945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.527994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.528021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.531597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.531647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.531675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.535023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.535073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.535101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.538204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.538265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.538304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.542252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.542303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.542331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.546283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.546333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.546361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.549731] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.549790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.549818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.553626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.553675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.553703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.557841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.557890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.557917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.561658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.561705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.561733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.565637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.565686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.565713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.569021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.569070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.569098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.572826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.572874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.572901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:56.979 [2024-12-13 13:07:37.576368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.576416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.576444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.580074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.580126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.580155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.583938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.583988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.584015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.587347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.587385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.587398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.590985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.591035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.591063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.594396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.594444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.594473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.597859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.597909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.979 [2024-12-13 13:07:37.597937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.979 [2024-12-13 13:07:37.601480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.979 [2024-12-13 13:07:37.601529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.601557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.604264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.604313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.604341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.607861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.607909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.607937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.611799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.611857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.611885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.615426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.615492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.615520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.618202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.618251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.618279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.621527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.621575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.621603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.624822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.624853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.624881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.628114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.628163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.628191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.980 [2024-12-13 13:07:37.631530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb064a0) 00:22:56.980 [2024-12-13 13:07:37.631582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.980 [2024-12-13 13:07:37.631611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.980 00:22:56.980 Latency(us) 00:22:56.980 [2024-12-13T13:07:37.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.980 [2024-12-13T13:07:37.756Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:56.980 nvme0n1 : 2.00 8619.04 1077.38 0.00 0.00 1853.33 461.73 6047.19 00:22:56.980 [2024-12-13T13:07:37.756Z] =================================================================================================================== 00:22:56.980 [2024-12-13T13:07:37.756Z] Total : 8619.04 1077.38 0.00 0.00 1853.33 461.73 6047.19 00:22:56.980 0 00:22:56.980 13:07:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:56.980 13:07:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:56.980 13:07:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:56.980 13:07:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:56.980 | .driver_specific 00:22:56.980 | .nvme_error 00:22:56.980 | .status_code 00:22:56.980 | .command_transient_transport_error' 00:22:57.239 13:07:37 -- host/digest.sh@71 -- # (( 556 > 0 )) 00:22:57.239 13:07:37 -- host/digest.sh@73 -- # killprocess 97473 00:22:57.239 13:07:37 -- common/autotest_common.sh@936 -- # '[' -z 97473 ']' 00:22:57.239 13:07:37 -- common/autotest_common.sh@940 -- # kill -0 97473 00:22:57.239 13:07:37 -- common/autotest_common.sh@941 -- # uname 00:22:57.239 13:07:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.239 13:07:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97473 00:22:57.239 13:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:57.239 killing process with pid 97473 00:22:57.239 13:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:57.239 13:07:37 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 97473' 00:22:57.239 Received shutdown signal, test time was about 2.000000 seconds 00:22:57.239 00:22:57.239 Latency(us) 00:22:57.239 [2024-12-13T13:07:38.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.239 [2024-12-13T13:07:38.015Z] =================================================================================================================== 00:22:57.239 [2024-12-13T13:07:38.015Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.239 13:07:37 -- common/autotest_common.sh@955 -- # kill 97473 00:22:57.239 13:07:37 -- common/autotest_common.sh@960 -- # wait 97473 00:22:57.499 13:07:38 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:22:57.499 13:07:38 -- host/digest.sh@54 -- # local rw bs qd 00:22:57.499 13:07:38 -- host/digest.sh@56 -- # rw=randwrite 00:22:57.499 13:07:38 -- host/digest.sh@56 -- # bs=4096 00:22:57.499 13:07:38 -- host/digest.sh@56 -- # qd=128 00:22:57.499 13:07:38 -- host/digest.sh@58 -- # bperfpid=97568 00:22:57.499 13:07:38 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:57.499 13:07:38 -- host/digest.sh@60 -- # waitforlisten 97568 /var/tmp/bperf.sock 00:22:57.499 13:07:38 -- common/autotest_common.sh@829 -- # '[' -z 97568 ']' 00:22:57.499 13:07:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:57.499 13:07:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:57.499 13:07:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:57.499 13:07:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.499 13:07:38 -- common/autotest_common.sh@10 -- # set +x 00:22:57.499 [2024-12-13 13:07:38.199179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
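For readers tracing the harness logic buried in the wrapped output above: after each timed run, host/digest.sh's get_transient_errcount reads bdevperf's per-status NVMe error counters over its RPC socket, checks that at least one injected digest error was observed, and then tears the bdevperf process down (killprocess). A minimal sketch of that step, using the socket path, bdev name and jq filter exactly as traced above (the errcount variable and the inline guard are illustrative, not the script's literal code):

  # Count completions with status COMMAND TRANSIENT TRANSPORT ERROR as seen by bdevperf.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # this run counted 556 such completions
  # Guarded teardown, mirroring killprocess in the trace: only signal a live, non-sudo process, then reap it.
  kill -0 "$bperfpid" && [[ "$(ps --no-headers -o comm= "$bperfpid")" != sudo ]] && kill "$bperfpid"
  wait "$bperfpid"

The error counters show up in bdev_get_iostat because the controller was attached after bdev_nvme_set_options --nvme-error-stat; --bdev-retry-count -1 lets the bdev layer keep retrying the failed I/O, which is why the latency summary above can still report zero failed operations despite the injected errors.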
00:22:57.499 [2024-12-13 13:07:38.199277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97568 ] 00:22:57.757 [2024-12-13 13:07:38.330937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.757 [2024-12-13 13:07:38.400295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.693 13:07:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.693 13:07:39 -- common/autotest_common.sh@862 -- # return 0 00:22:58.693 13:07:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:58.693 13:07:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:58.693 13:07:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:58.693 13:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.693 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:22:58.693 13:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.693 13:07:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:58.693 13:07:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.261 nvme0n1 00:22:59.261 13:07:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:59.261 13:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.261 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:22:59.261 13:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.261 13:07:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:59.261 13:07:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:59.261 Running I/O for 2 seconds... 
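The randwrite pass that begins here ("Running I/O for 2 seconds...") was set up by the RPCs traced just above. Pieced together, the sequence is roughly the sketch below; the commands and flags are copied from the trace, bperf_rpc is expanded the same way the trace expands it (rpc.py -s /var/tmp/bperf.sock), and the rpc_cmd socket is an assumption, since this excerpt does not show which socket the framework helper uses:

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # assumption: rpc.py default socket

  # Start bdevperf in wait mode (-z): core mask 0x2, 4096-byte random writes, queue depth 128, 2 s runtime.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # waitforlisten in the trace blocks here until /var/tmp/bperf.sock accepts RPCs.

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters, retry failed I/O forever
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # no crc32c corruption while connecting
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256              # corrupt the next 256 crc32c operations
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because the controller is attached with --ddgst, every NVMe/TCP data PDU carries a CRC32C data digest; the injected crc32c corruption is what produces the "Data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions in the output that follows.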
00:22:59.261 [2024-12-13 13:07:39.925041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f6890 00:22:59.261 [2024-12-13 13:07:39.925541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:39.925578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:39.936638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eff18 00:22:59.261 [2024-12-13 13:07:39.937292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:39.937357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:39.947070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2510 00:22:59.261 [2024-12-13 13:07:39.947714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:39.947800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:39.957306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190efae0 00:22:59.261 [2024-12-13 13:07:39.957926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:39.957988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:39.967906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2948 00:22:59.261 [2024-12-13 13:07:39.968513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:39.968544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:39.978190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f35f0 00:22:59.261 [2024-12-13 13:07:39.978784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:39.978860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:39.988970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f96f8 00:22:59.261 [2024-12-13 13:07:39.990058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:39.990104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 
sqhd:0071 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:39.999475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eea00 00:22:59.261 [2024-12-13 13:07:40.000264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:40.000311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:40.010058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f6020 00:22:59.261 [2024-12-13 13:07:40.010790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:40.010846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:40.020190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fac10 00:22:59.261 [2024-12-13 13:07:40.020896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:40.020943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.261 [2024-12-13 13:07:40.031803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fb048 00:22:59.261 [2024-12-13 13:07:40.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.261 [2024-12-13 13:07:40.032485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.519 [2024-12-13 13:07:40.043225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0788 00:22:59.520 [2024-12-13 13:07:40.044344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.044380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.054221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e9e10 00:22:59.520 [2024-12-13 13:07:40.055159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.055195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.067616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f5be8 00:22:59.520 [2024-12-13 13:07:40.068550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.068595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.076996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e38d0 00:22:59.520 [2024-12-13 13:07:40.078067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.078114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.088166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f4f40 00:22:59.520 [2024-12-13 13:07:40.089390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.089439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.100702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f6cc8 00:22:59.520 [2024-12-13 13:07:40.101912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.101958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.108358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eaef0 00:22:59.520 [2024-12-13 13:07:40.108606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.108641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.120425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed4e8 00:22:59.520 [2024-12-13 13:07:40.121266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.121327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.129311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190edd58 00:22:59.520 [2024-12-13 13:07:40.130477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.130524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.139196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f46d0 00:22:59.520 [2024-12-13 13:07:40.140551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.140598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.150683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eff18 00:22:59.520 [2024-12-13 13:07:40.151600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.151650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.160970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f8a50 00:22:59.520 [2024-12-13 13:07:40.161880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.161936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.171334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e01f8 00:22:59.520 [2024-12-13 13:07:40.172148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.172195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.181600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0788 00:22:59.520 [2024-12-13 13:07:40.182538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.182585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.190736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed920 00:22:59.520 [2024-12-13 13:07:40.192398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.192445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.200757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190dece0 00:22:59.520 [2024-12-13 13:07:40.202405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.202452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.210966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ea680 00:22:59.520 [2024-12-13 13:07:40.212751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.212807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.220916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f1ca0 00:22:59.520 [2024-12-13 13:07:40.222696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.222768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.231029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de8a8 00:22:59.520 [2024-12-13 13:07:40.231611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.231644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.241942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fc998 00:22:59.520 [2024-12-13 13:07:40.242724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.242798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.251874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fbcf0 00:22:59.520 [2024-12-13 13:07:40.252621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.252668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.261861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e9e10 00:22:59.520 [2024-12-13 13:07:40.263030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.263077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.271607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e3060 00:22:59.520 [2024-12-13 13:07:40.273036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.273068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.281838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fc128 00:22:59.520 [2024-12-13 13:07:40.282601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 
13:07:40.282646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.520 [2024-12-13 13:07:40.291698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e49b0 00:22:59.520 [2024-12-13 13:07:40.292864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-12-13 13:07:40.292920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.304879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ea680 00:22:59.782 [2024-12-13 13:07:40.306089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.306137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.312541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e3060 00:22:59.782 [2024-12-13 13:07:40.312871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.312904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.324941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e0a68 00:22:59.782 [2024-12-13 13:07:40.325771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.325841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.333720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ea248 00:22:59.782 [2024-12-13 13:07:40.334987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.335033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.344321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de038 00:22:59.782 [2024-12-13 13:07:40.345507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.345554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.355009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f96f8 00:22:59.782 [2024-12-13 13:07:40.356371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:59.782 [2024-12-13 13:07:40.356419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.366428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190dece0 00:22:59.782 [2024-12-13 13:07:40.367325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.367374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.375750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ea248 00:22:59.782 [2024-12-13 13:07:40.377534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.377565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.386302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fcdd0 00:22:59.782 [2024-12-13 13:07:40.388129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.388160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.395929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eff18 00:22:59.782 [2024-12-13 13:07:40.396858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.396912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.406493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f1868 00:22:59.782 [2024-12-13 13:07:40.407471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.407504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.417042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190edd58 00:22:59.782 [2024-12-13 13:07:40.418036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.418067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.427527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de038 00:22:59.782 [2024-12-13 13:07:40.428380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12559 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.428442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.439516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f3a28 00:22:59.782 [2024-12-13 13:07:40.440422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.440466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.449790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190efae0 00:22:59.782 [2024-12-13 13:07:40.450729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.450800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.459581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eb328 00:22:59.782 [2024-12-13 13:07:40.460768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.782 [2024-12-13 13:07:40.460839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.782 [2024-12-13 13:07:40.470846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2948 00:22:59.783 [2024-12-13 13:07:40.471710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.471778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.480476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f3a28 00:22:59.783 [2024-12-13 13:07:40.481727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.481799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.492418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e7818 00:22:59.783 [2024-12-13 13:07:40.492817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.492863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.503165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f3e60 00:22:59.783 [2024-12-13 13:07:40.504015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.504064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.514010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ebb98 00:22:59.783 [2024-12-13 13:07:40.514724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.514777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.526541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de8a8 00:22:59.783 [2024-12-13 13:07:40.527937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.527982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.534321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e01f8 00:22:59.783 [2024-12-13 13:07:40.534751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.534790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.546685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de038 00:22:59.783 [2024-12-13 13:07:40.547832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.547871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.783 [2024-12-13 13:07:40.554471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fdeb0 00:22:59.783 [2024-12-13 13:07:40.554584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.783 [2024-12-13 13:07:40.554604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.567553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2948 00:23:00.042 [2024-12-13 13:07:40.568244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.568321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.578218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eb328 00:23:00.042 [2024-12-13 13:07:40.578900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.578929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.589195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f6cc8 00:23:00.042 [2024-12-13 13:07:40.590286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.590317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.600053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0350 00:23:00.042 [2024-12-13 13:07:40.600876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.600924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.610300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eff18 00:23:00.042 [2024-12-13 13:07:40.611146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.611211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.621326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e6b70 00:23:00.042 [2024-12-13 13:07:40.622093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.622141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.631465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de8a8 00:23:00.042 [2024-12-13 13:07:40.632230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.632275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.641918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2510 00:23:00.042 [2024-12-13 13:07:40.642618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.642665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.652093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e8d30 00:23:00.042 [2024-12-13 13:07:40.652748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.652817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.662374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0350 00:23:00.042 [2024-12-13 13:07:40.663022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.663084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.672579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e23b8 00:23:00.042 [2024-12-13 13:07:40.673187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.673222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.683381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eb328 00:23:00.042 [2024-12-13 13:07:40.684419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.684450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.694433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190df550 00:23:00.042 [2024-12-13 13:07:40.695076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.695146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.705291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fcdd0 00:23:00.042 [2024-12-13 13:07:40.706139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.706186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.715501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fc998 00:23:00.042 [2024-12-13 13:07:40.716277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.716324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.725475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fc128 00:23:00.042 [2024-12-13 
13:07:40.726231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.726278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.735352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0350 00:23:00.042 [2024-12-13 13:07:40.736115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.736162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.745691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed4e8 00:23:00.042 [2024-12-13 13:07:40.746344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.042 [2024-12-13 13:07:40.746392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:00.042 [2024-12-13 13:07:40.756058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fef90 00:23:00.043 [2024-12-13 13:07:40.756673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.043 [2024-12-13 13:07:40.756737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:00.043 [2024-12-13 13:07:40.766321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed4e8 00:23:00.043 [2024-12-13 13:07:40.766959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.043 [2024-12-13 13:07:40.767037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:00.043 [2024-12-13 13:07:40.776457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e88f8 00:23:00.043 [2024-12-13 13:07:40.777402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.043 [2024-12-13 13:07:40.777448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:00.043 [2024-12-13 13:07:40.785770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0ff8 00:23:00.043 [2024-12-13 13:07:40.786849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.043 [2024-12-13 13:07:40.786910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:00.043 [2024-12-13 13:07:40.795965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eb328 
00:23:00.043 [2024-12-13 13:07:40.796784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.043 [2024-12-13 13:07:40.796868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:00.043 [2024-12-13 13:07:40.805888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190efae0 00:23:00.043 [2024-12-13 13:07:40.806771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.043 [2024-12-13 13:07:40.806827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:00.043 [2024-12-13 13:07:40.816319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f6020 00:23:00.043 [2024-12-13 13:07:40.817285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.043 [2024-12-13 13:07:40.817331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.826892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ea248 00:23:00.302 [2024-12-13 13:07:40.827445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.827481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.838791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fef90 00:23:00.302 [2024-12-13 13:07:40.840131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.840178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.847153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e5ec8 00:23:00.302 [2024-12-13 13:07:40.848149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.848194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.858565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f9f68 00:23:00.302 [2024-12-13 13:07:40.859709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.859780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.866113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) 
with pdu=0x2000190e1f80 00:23:00.302 [2024-12-13 13:07:40.866261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.866279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.879143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e6b70 00:23:00.302 [2024-12-13 13:07:40.879852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.879907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.889334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e5ec8 00:23:00.302 [2024-12-13 13:07:40.890421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.890467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.899344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed4e8 00:23:00.302 [2024-12-13 13:07:40.900158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.900233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.909427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e1710 00:23:00.302 [2024-12-13 13:07:40.910222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.910268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.919653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e5ec8 00:23:00.302 [2024-12-13 13:07:40.920420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.920469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.929668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e3060 00:23:00.302 [2024-12-13 13:07:40.930391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.930440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.939655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2045a00) with pdu=0x2000190e73e0 00:23:00.302 [2024-12-13 13:07:40.941063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.941111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.949726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fac10 00:23:00.302 [2024-12-13 13:07:40.950182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.950215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.959512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fc998 00:23:00.302 [2024-12-13 13:07:40.960200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.960263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.968185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fd208 00:23:00.302 [2024-12-13 13:07:40.968292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.968311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.978870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e0ea0 00:23:00.302 [2024-12-13 13:07:40.979188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.979219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.988710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f9b30 00:23:00.302 [2024-12-13 13:07:40.989185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.302 [2024-12-13 13:07:40.989221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:00.302 [2024-12-13 13:07:40.998533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190df988 00:23:00.303 [2024-12-13 13:07:40.998980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:40.999013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:00.303 [2024-12-13 13:07:41.008215] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e9168 00:23:00.303 [2024-12-13 13:07:41.008620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:41.008654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:00.303 [2024-12-13 13:07:41.017943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ebfd0 00:23:00.303 [2024-12-13 13:07:41.018315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:41.018349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:00.303 [2024-12-13 13:07:41.027724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f92c0 00:23:00.303 [2024-12-13 13:07:41.028070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:41.028103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:00.303 [2024-12-13 13:07:41.037155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e84c0 00:23:00.303 [2024-12-13 13:07:41.037430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:41.037495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:00.303 [2024-12-13 13:07:41.046654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f92c0 00:23:00.303 [2024-12-13 13:07:41.046941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:41.046984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:00.303 [2024-12-13 13:07:41.056597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ebfd0 00:23:00.303 [2024-12-13 13:07:41.056876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:41.056900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:00.303 [2024-12-13 13:07:41.067883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e5ec8 00:23:00.303 [2024-12-13 13:07:41.068437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.303 [2024-12-13 13:07:41.068486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.562 [2024-12-13 13:07:41.081561] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fcdd0 00:23:00.562 [2024-12-13 13:07:41.083380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.562 [2024-12-13 13:07:41.083474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:00.562 [2024-12-13 13:07:41.091709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ee5c8 00:23:00.562 [2024-12-13 13:07:41.093246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.562 [2024-12-13 13:07:41.093291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:00.562 [2024-12-13 13:07:41.101621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f57b0 00:23:00.562 [2024-12-13 13:07:41.102785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.562 [2024-12-13 13:07:41.102842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:00.562 [2024-12-13 13:07:41.111093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed0b0 00:23:00.562 [2024-12-13 13:07:41.112297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.562 [2024-12-13 13:07:41.112342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.562 [2024-12-13 13:07:41.120721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f1430 00:23:00.562 [2024-12-13 13:07:41.121900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.562 [2024-12-13 13:07:41.121943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:00.562 [2024-12-13 13:07:41.131042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190df118 00:23:00.562 [2024-12-13 13:07:41.132596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.132642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.142292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f31b8 00:23:00.563 [2024-12-13 13:07:41.143307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.143339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 
13:07:41.151548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed0b0 00:23:00.563 [2024-12-13 13:07:41.152994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.153023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.161399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ecc78 00:23:00.563 [2024-12-13 13:07:41.162121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.162165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.171180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fc128 00:23:00.563 [2024-12-13 13:07:41.172100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.172144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.181028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e9168 00:23:00.563 [2024-12-13 13:07:41.182407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.182455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.190321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e8088 00:23:00.563 [2024-12-13 13:07:41.190856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.190889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.200098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e8088 00:23:00.563 [2024-12-13 13:07:41.201376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.201421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.209824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e99d8 00:23:00.563 [2024-12-13 13:07:41.210391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.210436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:23:00.563 [2024-12-13 13:07:41.219443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fa7d8 00:23:00.563 [2024-12-13 13:07:41.220199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.220245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.228747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190df988 00:23:00.563 [2024-12-13 13:07:41.229727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.229781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.238987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de038 00:23:00.563 [2024-12-13 13:07:41.239625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.239685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.248412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e27f0 00:23:00.563 [2024-12-13 13:07:41.248988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.249018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.257936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f8e88 00:23:00.563 [2024-12-13 13:07:41.258497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.258542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.268451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e99d8 00:23:00.563 [2024-12-13 13:07:41.269664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.269710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.276540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f20d8 00:23:00.563 [2024-12-13 13:07:41.277465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.277509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.286842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e84c0 00:23:00.563 [2024-12-13 13:07:41.287481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.287544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.297167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ecc78 00:23:00.563 [2024-12-13 13:07:41.298837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.298882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.306731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de038 00:23:00.563 [2024-12-13 13:07:41.308433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.308479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.316491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f4298 00:23:00.563 [2024-12-13 13:07:41.318178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.318223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.325895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f8e88 00:23:00.563 [2024-12-13 13:07:41.327614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.327674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:00.563 [2024-12-13 13:07:41.336023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f3a28 00:23:00.563 [2024-12-13 13:07:41.337570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.563 [2024-12-13 13:07:41.337616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.346186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fb8b8 00:23:00.823 [2024-12-13 13:07:41.347290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.347323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.356236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e99d8 00:23:00.823 [2024-12-13 13:07:41.357390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.357434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.365396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2948 00:23:00.823 [2024-12-13 13:07:41.366314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.366359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.375554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190df118 00:23:00.823 [2024-12-13 13:07:41.376683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.376727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.385397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fcdd0 00:23:00.823 [2024-12-13 13:07:41.386362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.386408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.395852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ebfd0 00:23:00.823 [2024-12-13 13:07:41.396773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.396827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.405583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f20d8 00:23:00.823 [2024-12-13 13:07:41.406387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.415167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f1430 00:23:00.823 [2024-12-13 13:07:41.415981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.416025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.424825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2948 00:23:00.823 [2024-12-13 13:07:41.425558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.425604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.434461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fe2e8 00:23:00.823 [2024-12-13 13:07:41.435180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.435228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.444133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2510 00:23:00.823 [2024-12-13 13:07:41.444845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.444902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.453218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f5378 00:23:00.823 [2024-12-13 13:07:41.454607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.454651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.462322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f8e88 00:23:00.823 [2024-12-13 13:07:41.462471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.462488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.472303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de470 00:23:00.823 [2024-12-13 13:07:41.472636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.472670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.482590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e5220 00:23:00.823 [2024-12-13 13:07:41.482891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.482926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.492470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f31b8 00:23:00.823 [2024-12-13 13:07:41.492706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.492741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.503546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de8a8 00:23:00.823 [2024-12-13 13:07:41.505307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.505354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.513311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e23b8 00:23:00.823 [2024-12-13 13:07:41.515013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.515059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.522939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f8e88 00:23:00.823 [2024-12-13 13:07:41.524472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.524517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.532668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e7818 00:23:00.823 [2024-12-13 13:07:41.534252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.534297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.542339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f20d8 00:23:00.823 [2024-12-13 13:07:41.544097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 13:07:41.544143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.552483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190df988 00:23:00.823 [2024-12-13 13:07:41.553324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.823 [2024-12-13 
13:07:41.553383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:00.823 [2024-12-13 13:07:41.562116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de8a8 00:23:00.823 [2024-12-13 13:07:41.563003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.824 [2024-12-13 13:07:41.563046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:00.824 [2024-12-13 13:07:41.570163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ee190 00:23:00.824 [2024-12-13 13:07:41.571618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.824 [2024-12-13 13:07:41.571663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:00.824 [2024-12-13 13:07:41.581860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fd208 00:23:00.824 [2024-12-13 13:07:41.582759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.824 [2024-12-13 13:07:41.582807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:00.824 [2024-12-13 13:07:41.591007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f7538 00:23:00.824 [2024-12-13 13:07:41.592359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.824 [2024-12-13 13:07:41.592403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.601682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fb8b8 00:23:01.083 [2024-12-13 13:07:41.602305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.602365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.614037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e0630 00:23:01.083 [2024-12-13 13:07:41.615269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.615332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.621230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fb048 00:23:01.083 [2024-12-13 13:07:41.621354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:01.083 [2024-12-13 13:07:41.621372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.630672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ef270 00:23:01.083 [2024-12-13 13:07:41.630792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.630810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.641689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0bc0 00:23:01.083 [2024-12-13 13:07:41.643699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.643767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.652459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f7538 00:23:01.083 [2024-12-13 13:07:41.653560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.653606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.663697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fb048 00:23:01.083 [2024-12-13 13:07:41.664891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.664936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.670692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ff3c8 00:23:01.083 [2024-12-13 13:07:41.671873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.671916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.682240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e9e10 00:23:01.083 [2024-12-13 13:07:41.683021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.083 [2024-12-13 13:07:41.683064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:01.083 [2024-12-13 13:07:41.691524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e27f0 00:23:01.083 [2024-12-13 13:07:41.692794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20148 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.692847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.701760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f8618 00:23:01.084 [2024-12-13 13:07:41.702249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.702283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.715102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f57b0 00:23:01.084 [2024-12-13 13:07:41.716324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.716370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.725183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f0ff8 00:23:01.084 [2024-12-13 13:07:41.726460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.726505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.736242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e38d0 00:23:01.084 [2024-12-13 13:07:41.737378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.737425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.746110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f4b08 00:23:01.084 [2024-12-13 13:07:41.747625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.747672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.756169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190de038 00:23:01.084 [2024-12-13 13:07:41.757395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.757441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.765720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f2d80 00:23:01.084 [2024-12-13 13:07:41.766501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:5239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.766548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.775620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e84c0 00:23:01.084 [2024-12-13 13:07:41.777013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.777045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.785644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190df550 00:23:01.084 [2024-12-13 13:07:41.786591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.786637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.795512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e9e10 00:23:01.084 [2024-12-13 13:07:41.796848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.796878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.805340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e6300 00:23:01.084 [2024-12-13 13:07:41.806416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.806463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.817523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f9b30 00:23:01.084 [2024-12-13 13:07:41.818587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.818632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.826383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed0b0 00:23:01.084 [2024-12-13 13:07:41.827497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.827545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.837310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fc998 00:23:01.084 [2024-12-13 13:07:41.838111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:21260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.838158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.845954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fac10 00:23:01.084 [2024-12-13 13:07:41.846837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.846889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:01.084 [2024-12-13 13:07:41.856382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fa7d8 00:23:01.084 [2024-12-13 13:07:41.857564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.084 [2024-12-13 13:07:41.857611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:01.343 [2024-12-13 13:07:41.866856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190ed920 00:23:01.343 [2024-12-13 13:07:41.867983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.343 [2024-12-13 13:07:41.868027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:01.343 [2024-12-13 13:07:41.877370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190eee38 00:23:01.343 [2024-12-13 13:07:41.877951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.343 [2024-12-13 13:07:41.878009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:01.343 [2024-12-13 13:07:41.889624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190e5658 00:23:01.343 [2024-12-13 13:07:41.890835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.343 [2024-12-13 13:07:41.890871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:01.343 [2024-12-13 13:07:41.897315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190fa3a0 00:23:01.343 [2024-12-13 13:07:41.897646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.343 [2024-12-13 13:07:41.897676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:01.343 [2024-12-13 13:07:41.910368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045a00) with pdu=0x2000190f7970 00:23:01.343 [2024-12-13 13:07:41.911408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.343 [2024-12-13 13:07:41.911459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.343 00:23:01.343 Latency(us) 00:23:01.343 [2024-12-13T13:07:42.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.343 [2024-12-13T13:07:42.119Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:01.343 nvme0n1 : 2.00 24884.86 97.21 0.00 0.00 5138.70 1832.03 13345.51 00:23:01.343 [2024-12-13T13:07:42.119Z] =================================================================================================================== 00:23:01.343 [2024-12-13T13:07:42.119Z] Total : 24884.86 97.21 0.00 0.00 5138.70 1832.03 13345.51 00:23:01.343 0 00:23:01.343 13:07:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:01.343 13:07:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:01.343 13:07:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:01.343 13:07:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:01.343 | .driver_specific 00:23:01.343 | .nvme_error 00:23:01.343 | .status_code 00:23:01.343 | .command_transient_transport_error' 00:23:01.607 13:07:42 -- host/digest.sh@71 -- # (( 195 > 0 )) 00:23:01.607 13:07:42 -- host/digest.sh@73 -- # killprocess 97568 00:23:01.607 13:07:42 -- common/autotest_common.sh@936 -- # '[' -z 97568 ']' 00:23:01.607 13:07:42 -- common/autotest_common.sh@940 -- # kill -0 97568 00:23:01.607 13:07:42 -- common/autotest_common.sh@941 -- # uname 00:23:01.607 13:07:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.607 13:07:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97568 00:23:01.607 13:07:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:01.607 13:07:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:01.607 killing process with pid 97568 00:23:01.607 13:07:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97568' 00:23:01.607 Received shutdown signal, test time was about 2.000000 seconds 00:23:01.607 00:23:01.607 Latency(us) 00:23:01.607 [2024-12-13T13:07:42.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.607 [2024-12-13T13:07:42.384Z] =================================================================================================================== 00:23:01.608 [2024-12-13T13:07:42.384Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.608 13:07:42 -- common/autotest_common.sh@955 -- # kill 97568 00:23:01.608 13:07:42 -- common/autotest_common.sh@960 -- # wait 97568 00:23:01.879 13:07:42 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:01.879 13:07:42 -- host/digest.sh@54 -- # local rw bs qd 00:23:01.879 13:07:42 -- host/digest.sh@56 -- # rw=randwrite 00:23:01.879 13:07:42 -- host/digest.sh@56 -- # bs=131072 00:23:01.879 13:07:42 -- host/digest.sh@56 -- # qd=16 00:23:01.879 13:07:42 -- host/digest.sh@58 -- # bperfpid=97654 00:23:01.879 13:07:42 -- host/digest.sh@60 -- # waitforlisten 97654 /var/tmp/bperf.sock 00:23:01.879 13:07:42 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:01.879 13:07:42 -- common/autotest_common.sh@829 
-- # '[' -z 97654 ']' 00:23:01.879 13:07:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:01.879 13:07:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:01.879 13:07:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:01.879 13:07:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.879 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.879 [2024-12-13 13:07:42.508635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:01.879 [2024-12-13 13:07:42.508768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97654 ] 00:23:01.879 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:01.879 Zero copy mechanism will not be used. 00:23:01.879 [2024-12-13 13:07:42.646145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.137 [2024-12-13 13:07:42.702302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.071 13:07:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.071 13:07:43 -- common/autotest_common.sh@862 -- # return 0 00:23:03.071 13:07:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:03.071 13:07:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:03.071 13:07:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:03.071 13:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.071 13:07:43 -- common/autotest_common.sh@10 -- # set +x 00:23:03.071 13:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.071 13:07:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:03.071 13:07:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:03.329 nvme0n1 00:23:03.588 13:07:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:03.588 13:07:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.588 13:07:44 -- common/autotest_common.sh@10 -- # set +x 00:23:03.588 13:07:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.588 13:07:44 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:03.588 13:07:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:03.588 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:03.588 Zero copy mechanism will not be used. 00:23:03.588 Running I/O for 2 seconds... 
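For readability, the shell trace above (host/digest.sh, the randwrite / 131072-byte / qd=16 run) drives this error path entirely over JSON-RPC: error counting is enabled on the bdevperf side, a data-digest (--ddgst) TCP controller is attached, crc32c corruption is injected on the target with accel_error_inject_error, and perform_tests starts the I/O whose digest failures fill the log below. The following is a minimal sketch reconstructed from the commands visible in the trace, not part of the captured console output; it assumes bdevperf is already running as shown ("bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z") and that the nvmf target answers on its default RPC socket (the trace issues accel_error_inject_error via rpc_cmd without printing that socket path).

  # bperf_rpc and target_rpc mirror the two RPC endpoints used by digest.sh
  bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  target_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"            # nvmf target; default socket assumed
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $target_rpc accel_error_inject_error -o crc32c -t disable           # clear any previous injection
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                          # data digest enabled on the initiator
  $target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32     # corrupt 32 crc32c results on the target
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # read back the transient transport error count, as digest.sh did for the previous run
  $bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

After perform_tests returns, the jq filter (the same one used in the earlier "(( 195 > 0 ))" check) extracts the COMMAND TRANSIENT TRANSPORT ERROR count from bdev_get_iostat, which is what the test asserts is non-zero when data digest errors are being injected.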
00:23:03.588 [2024-12-13 13:07:44.260924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.261290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.261331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.265632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.265825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.265894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.269977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.270120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.270140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.274119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.274252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.274272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.278290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.278410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.278430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.282365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.282474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.282494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.286686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.286882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.286903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.290921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.291193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.291223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.294925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.295229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.295251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.299171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.299340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.299361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.303374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.303519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.303555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.307717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.307876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.307897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.311907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.312025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.312044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.316170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.316318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.316338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.320429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.320574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.320594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.324936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.325163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.325199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.329094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.329314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.329334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.333317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.333467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.333487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.337472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.337603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.337622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.341842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.341972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.341992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.346024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.346130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.346164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.350291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.350433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.350452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.354526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.354671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.354690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.588 [2024-12-13 13:07:44.358967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.588 [2024-12-13 13:07:44.359245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.588 [2024-12-13 13:07:44.359269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.589 [2024-12-13 13:07:44.363521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.589 [2024-12-13 13:07:44.363749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.589 [2024-12-13 13:07:44.363769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.848 [2024-12-13 13:07:44.368105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.848 [2024-12-13 13:07:44.368280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.368317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.372558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.372649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.372671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.376835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.376961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 
[2024-12-13 13:07:44.376980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.381040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.381179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.381198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.385228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.385365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.385384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.389324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.389466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.389486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.393415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.393633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.393652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.397454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.397663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.397682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.401642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.401828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.401859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.405732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.405857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.405875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.409813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.409936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.409955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.414072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.414184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.414203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.418173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.418310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.418329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.422227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.422368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.422387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.426320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.426535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.426554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.430420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.430631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.430650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.434458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.434595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.434615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.438459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.438571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.438590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.442429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.442560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.442579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.446447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.446568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.446589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.450475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.450612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.450631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.454457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.454599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.454618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.458533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.458750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.458780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.462570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.462793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.462813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.466550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.466708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.466727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.470559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.470697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.470716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.474496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.474608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.474627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.849 [2024-12-13 13:07:44.478537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.849 [2024-12-13 13:07:44.478650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.849 [2024-12-13 13:07:44.478669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.482518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.482657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.482676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.486546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.486685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.486704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.490662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 
[2024-12-13 13:07:44.490895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.490916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.494579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.494798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.494817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.498598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.498735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.498755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.502607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.502735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.502754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.506666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.506798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.506828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.510656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.510785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.510814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.514641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.514788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.514807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.518635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.518786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.518805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.522665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.522913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.522994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.526575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.526792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.526811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.530589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.530729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.530749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.534693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.534818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.534838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.538789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.538898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.538918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.542820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.542934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.542953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.546821] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.546964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.546984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.550881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.551024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.551043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.555027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.555280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.555307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.558905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.559097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.559176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.563001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.563207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.563228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.567409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.567572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.567592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.572047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.572213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.572232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
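Each "data_crc32_calc_done: *ERROR*: Data digest error" entry above means the CRC32C recomputed over a received data PDU did not match the DDGST value carried with the PDU; the repeated entries here all report that same check failing for successive WRITEs on the one TCP qpair (tqpair 0x2045ba0). For readers who want to see the check in isolation, the following is a minimal, self-contained C sketch of a CRC32C data-digest comparison. It is illustrative only: the bit-by-bit CRC loop, the 512-byte payload, and the injected bit flip are stand-ins and are not SPDK's table/ISA-accelerated implementation or data taken from this log.

/*
 * Minimal sketch of the NVMe/TCP data-digest (DDGST) check that produces the
 * "Data digest error" entries above.  Illustrative only: payload contents and
 * the corrupted byte are made up, and SPDK uses an accelerated CRC32C.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reflected CRC32C (Castagnoli), seed 0xFFFFFFFF, final XOR 0xFFFFFFFF.
 * Standard check value: crc32c((uint8_t *)"123456789", 9) == 0xE3069283. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];                     /* stand-in for a data PDU's DATA field */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst = crc32c(payload, sizeof(payload));  /* digest the sender appends */

    payload[100] ^= 0x01;                     /* corrupt one bit "in flight" */
    uint32_t recomputed = crc32c(payload, sizeof(payload));

    if (recomputed != ddgst) {
        /* This mismatch is the condition reported for each WRITE above. */
        printf("Data digest error: expected 0x%08x, got 0x%08x\n", ddgst, recomputed);
    }
    return 0;
}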
00:23:03.850 [2024-12-13 13:07:44.576113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.576272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.576290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.580291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.580427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.580446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.584343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.584482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.584501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.588464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.588681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.588700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.850 [2024-12-13 13:07:44.592510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.850 [2024-12-13 13:07:44.592719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.850 [2024-12-13 13:07:44.592737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.851 [2024-12-13 13:07:44.596587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.851 [2024-12-13 13:07:44.596743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.851 [2024-12-13 13:07:44.596762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.851 [2024-12-13 13:07:44.600733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.851 [2024-12-13 13:07:44.600869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.851 [2024-12-13 13:07:44.600888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.851 [2024-12-13 13:07:44.604774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.851 [2024-12-13 13:07:44.604900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.851 [2024-12-13 13:07:44.604919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.851 [2024-12-13 13:07:44.608868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.851 [2024-12-13 13:07:44.608982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.851 [2024-12-13 13:07:44.609001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.851 [2024-12-13 13:07:44.613016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.851 [2024-12-13 13:07:44.613152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.851 [2024-12-13 13:07:44.613171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.851 [2024-12-13 13:07:44.617030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.851 [2024-12-13 13:07:44.617156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.851 [2024-12-13 13:07:44.617175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.851 [2024-12-13 13:07:44.621343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:03.851 [2024-12-13 13:07:44.621584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.851 [2024-12-13 13:07:44.621604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.625721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.625928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.625948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.630044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.630172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.630192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.634134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.634268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.634287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.638190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.638330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.638350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.642234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.642365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.642383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.646356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.646499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.646518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.650503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.650650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.650670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.654629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.654862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.654883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.658655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.658930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.658976] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.662727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.662883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.111 [2024-12-13 13:07:44.662904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.111 [2024-12-13 13:07:44.666612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.111 [2024-12-13 13:07:44.666722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.666741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.670634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.670743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.670763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.674579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.674738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.674757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.678551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.678687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.678707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.682511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.682657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.682676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.686681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.686919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:04.112 [2024-12-13 13:07:44.686957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.690676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.690894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.690913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.694647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.694798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.694817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.698560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.698676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.698695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.702571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.702681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.702700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.706553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.706665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.706684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.710579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.710722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.710742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.714674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.714839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.714859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.718695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.718923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.718943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.722669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.722889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.722908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.726623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.726763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.726793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.730650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.730762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.730781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.734534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.734663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.734682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.738470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.738607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.738626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.742534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.742673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.742692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.746564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.746706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.746725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.750815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.751046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.751066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.754757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.754999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.755051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.758803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.758963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.758981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.762725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.762854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.762873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.766686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.766818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.766838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.770755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.770897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.770916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.774752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.112 [2024-12-13 13:07:44.774895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.112 [2024-12-13 13:07:44.774914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.112 [2024-12-13 13:07:44.778682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.778837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.778856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.782700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.782925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.782945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.786601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.786848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.786867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.790637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.790777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.790796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.794537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.794646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.794665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.798481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 
[2024-12-13 13:07:44.798602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.798623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.802461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.802589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.802608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.806462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.806600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.806619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.810661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.810814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.810834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.814701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.814935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.814954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.818730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.818945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.818964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.822758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.822913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.822932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.827040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.827218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.827253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.831624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.831766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.831786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.835616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.835745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.835776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.839739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.839921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.839941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.843811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.843974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.843994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.847920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.848142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.848163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.851954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.852167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.852185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.856020] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.856176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.856195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.860072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.860184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.860203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.864199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.864309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.864328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.868181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.868293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.868312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.872316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.872455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.872475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.876463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.876603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.876622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.113 [2024-12-13 13:07:44.880596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.880831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.880850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
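The paired completion lines print the NVMe status as "(SCT/SC)": 00/22 is Status Code Type 0x0 (generic command status) with Status Code 0x22, Transient Transport Error, and dnr:0 means the command may be retried. Below is a small illustrative decoder for the 16-bit status field (the upper half of completion dword 3); the sample status word is constructed by hand to match these log lines rather than parsed from them, and this is not SPDK's spdk_nvme_print_completion, only a sketch of the same field layout.

/*
 * Illustrative decoder for the 16-bit completion status printed above.
 * The sample word is built to match "(00/22) ... p:0 m:0 dnr:0".
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* sct=0x0 (generic), sc=0x22 (Transient Transport Error), m=0, dnr=0, p=0 */
    uint16_t status = (uint16_t)(0x22u << 1);

    unsigned p   =  status        & 0x1;   /* phase tag         */
    unsigned sc  = (status >> 1)  & 0xFF;  /* status code       */
    unsigned sct = (status >> 9)  & 0x7;   /* status code type  */
    unsigned m   = (status >> 14) & 0x1;   /* more status info  */
    unsigned dnr = (status >> 15) & 0x1;   /* do not retry      */

    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x22)
               ? "  -> TRANSIENT TRANSPORT ERROR, retryable while dnr=0"
               : "");
    return 0;
}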
00:23:04.113 [2024-12-13 13:07:44.884979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.113 [2024-12-13 13:07:44.885214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.113 [2024-12-13 13:07:44.885254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.889279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.889435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.889454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.893647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.893782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.893801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.897676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.897823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.897843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.901811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.901921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.901940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.905927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.906073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.906092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.910029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.910154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.910173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.914138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.914352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.914371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.918157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.918385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.918403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.922375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.922533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.922551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.926512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.926645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.926664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.930616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.930728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.930748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.934570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.934687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.934706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.938584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.938728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.938747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.942637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.942787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.942806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.946662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.946885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.946905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.950730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.950963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.950982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.954664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.954813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.954832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.958654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.958782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.958802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.962581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.962699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.962718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.966508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.966644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.966663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.970558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.970702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.374 [2024-12-13 13:07:44.970721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.374 [2024-12-13 13:07:44.974673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.374 [2024-12-13 13:07:44.974840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:44.974860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:44.978786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:44.978991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:44.979042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:44.982760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:44.982981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:44.983001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:44.986619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:44.986757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:44.986787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:44.990654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:44.990774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:44.990794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:44.994544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:44.994654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 
[2024-12-13 13:07:44.994672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:44.998487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:44.998596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:44.998616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.002558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.002697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.002716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.006727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.006883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.006903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.010884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.011114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.011166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.014819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.015015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.015050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.018818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.018959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.018979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.022808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.022939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.022958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.026868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.026967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.026988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.030901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.031011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.031030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.035028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.035253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.035276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.038934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.039073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.039093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.042950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.043194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.043216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.046990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.047226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.047252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.050952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.051083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.051103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.054844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.054952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.054971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.058734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.058870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.058889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.062637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.062757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.062788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.066672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.066857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.066877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.070627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.070791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.070822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.074678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.074909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.074952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.078545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.078761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.078791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.082662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.082872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.375 [2024-12-13 13:07:45.082907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.375 [2024-12-13 13:07:45.086881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.375 [2024-12-13 13:07:45.086996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.087026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.090998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.091167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.091188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.094909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.095019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.095038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.098862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.099029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.099049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.102882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.103072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.103092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.106881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 
[2024-12-13 13:07:45.107096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.107179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.110819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.111008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.111026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.114838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.114999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.115018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.118730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.118864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.118884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.122780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.122906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.122926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.126713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.126838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.126858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.131242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.131419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.135522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.135681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.135701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.140217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.140439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.140459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.376 [2024-12-13 13:07:45.144703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.376 [2024-12-13 13:07:45.144966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.376 [2024-12-13 13:07:45.144988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.635 [2024-12-13 13:07:45.149457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.635 [2024-12-13 13:07:45.149667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.635 [2024-12-13 13:07:45.149687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.635 [2024-12-13 13:07:45.153829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.635 [2024-12-13 13:07:45.153940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.635 [2024-12-13 13:07:45.153961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.635 [2024-12-13 13:07:45.158192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.635 [2024-12-13 13:07:45.158321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.635 [2024-12-13 13:07:45.158341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.635 [2024-12-13 13:07:45.162497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.635 [2024-12-13 13:07:45.162609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.635 [2024-12-13 13:07:45.162628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.635 [2024-12-13 13:07:45.166518] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.166681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.166700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.170538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.170683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.170702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.174653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.174883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.174903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.178715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.178963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.179001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.182693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.182899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.182919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.186716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.186860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.186880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.190672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.190811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.190831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:04.636 [2024-12-13 13:07:45.194549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.194659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.194679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.198507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.198668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.198687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.202484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.202617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.202636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.206653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.206884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.206904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.210494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.210705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.210723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.214573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.214758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.214788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.218605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.218727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.218747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.222645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.222770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.222790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.226517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.226628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.226647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.230548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.230699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.230718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.234503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.234619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.234638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.238514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.238709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.238728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.242385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.242563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.242582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.246451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.246598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.246617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.250381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.250480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.250499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.254370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.254469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.254488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.258438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.258537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.258556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.262442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.262605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.262625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.266392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.266536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.266555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.270542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.636 [2024-12-13 13:07:45.270736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.636 [2024-12-13 13:07:45.270755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.636 [2024-12-13 13:07:45.274434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.274612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.274630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.278468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.278617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.278636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.282430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.282538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.282557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.286248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.286348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.286367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.290224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.290324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.290343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.294253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.294403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.294422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.298224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.298352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.298371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.302314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.302517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 
[2024-12-13 13:07:45.302536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.306289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.306504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.306522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.310329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.310493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.310512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.314264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.314383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.314401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.318155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.318254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.318273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.322130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.322224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.322243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.326042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.326195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.326214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.330056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.330178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.330197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.334079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.334276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.334295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.338111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.338303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.338322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.342339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.342536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.342555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.346677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.346811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.346832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.350601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.350696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.350714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.354620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.354716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.354735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.358584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.358728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.358748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.362565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.362665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.362684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.366598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.366823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.366843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.370538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.370771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.370790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.374528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.374700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.374719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.378851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.378967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.379002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.383179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.637 [2024-12-13 13:07:45.383313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.637 [2024-12-13 13:07:45.383335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.637 [2024-12-13 13:07:45.387913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.638 [2024-12-13 13:07:45.388018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.638 [2024-12-13 13:07:45.388039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.638 [2024-12-13 13:07:45.392415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.638 [2024-12-13 13:07:45.392564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.638 [2024-12-13 13:07:45.392584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.638 [2024-12-13 13:07:45.396940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.638 [2024-12-13 13:07:45.397069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.638 [2024-12-13 13:07:45.397105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.638 [2024-12-13 13:07:45.401418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.638 [2024-12-13 13:07:45.401617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.638 [2024-12-13 13:07:45.401637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.638 [2024-12-13 13:07:45.405716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.638 [2024-12-13 13:07:45.406002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.638 [2024-12-13 13:07:45.406024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.638 [2024-12-13 13:07:45.410207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.638 [2024-12-13 13:07:45.410380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.638 [2024-12-13 13:07:45.410399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.414400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.414497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.414516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.418767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 
[2024-12-13 13:07:45.418878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.418898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.423031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.423167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.423189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.427182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.427347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.427369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.431325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.431627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.431648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.435698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.436144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.436171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.440383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.440577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.440597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.444668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.444869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.444889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.448882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.448998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.449019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.897 [2024-12-13 13:07:45.452963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.897 [2024-12-13 13:07:45.453078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.897 [2024-12-13 13:07:45.453098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.457274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.457385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.457405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.461434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.461600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.461620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.465567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.465687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.465706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.469969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.470200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.470220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.474029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.474244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.474263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.478272] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.478459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.478479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.482369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.482490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.482510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.486423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.486545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.486565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.490582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.490697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.490717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.494957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.495143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.495164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.499152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.499307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.499328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.503349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.503606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.503658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:04.898 [2024-12-13 13:07:45.507663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.507887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.507906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.512036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.512202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.512221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.516173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.516306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.516325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.520265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.520381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.520401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.524443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.524561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.524581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.528898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.529068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.529088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.533084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.533240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.533260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.537440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.537656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.537676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.541600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.541826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.541846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.546135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.546317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.546337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.550248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.550362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.550382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.554361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.554481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.554501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.558766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.558890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.558910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.562826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.562994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.563013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.566909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.567040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.898 [2024-12-13 13:07:45.567059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.898 [2024-12-13 13:07:45.571233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.898 [2024-12-13 13:07:45.571461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.571494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.575383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.575633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.575652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.579729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.579962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.579982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.583978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.584094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.584127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.588106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.588249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.588269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.592137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.592263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.592282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.596259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.596432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.596451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.600292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.600436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.600456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.604486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.604699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.604718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.608579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.608774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.608793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.612694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.612894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.612930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.616848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.616982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.617001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.620766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.620875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 
[2024-12-13 13:07:45.620894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.624669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.624790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.624809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.628670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.628855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.628875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.632549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.632688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.632707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.636641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.636868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.636887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.640637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.640844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.640863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.644710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.644903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.644922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.648872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.648987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.649007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.653101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.653199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.653219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.657222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.657329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.657347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.661363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.661526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.661546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.665389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.665535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.665554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.899 [2024-12-13 13:07:45.669754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:04.899 [2024-12-13 13:07:45.670026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.899 [2024-12-13 13:07:45.670064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.159 [2024-12-13 13:07:45.674075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.159 [2024-12-13 13:07:45.674275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.674310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.678322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.678496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.678515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.682380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.682514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.682533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.686317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.686441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.686460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.690436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.690551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.690570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.694483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.694643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.694662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.698484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.698605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.698624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.702565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.702804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.702825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.706437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.706627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.706647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.710580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.710747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.710767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.714506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.714616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.714636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.718526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.718655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.718674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.722572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.722667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.722686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.726483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.726647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.726667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.730568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.730715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.730734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.734583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 
[2024-12-13 13:07:45.734808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.734827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.738572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.738791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.738821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.742507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.742709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.742728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.746513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.746638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.746657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.750563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.750668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.750687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.754599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.754713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.754733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.758623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.758797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.758817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.762533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.762675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.762694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.766692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.766934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.766960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.770685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.770903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.770932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.774682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.160 [2024-12-13 13:07:45.774876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.160 [2024-12-13 13:07:45.774896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.160 [2024-12-13 13:07:45.778783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.778881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.778899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.782621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.782750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.782770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.786517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.786632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.786651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.790570] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.790732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.790751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.794638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.794789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.794824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.798594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.798822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.798841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.802550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.802745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.802764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.806549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.806729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.806748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.810559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.810683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.810702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.814547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.814673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.814692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
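The repeated data_crc32_calc_done errors above are the NVMe/TCP data digest (DDGST) check failing: the receiving side computes a CRC32C over each data PDU payload and, on a mismatch, the affected WRITE is completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable status. As a rough, self-contained sketch of what that digest check amounts to (this is not SPDK's actual implementation, and the helper names are made up for illustration):

/* Minimal sketch, not SPDK code: the NVMe/TCP data digest is a CRC32C
 * (Castagnoli) over the PDU data. Names below are illustrative only. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;            /* standard CRC32C initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            /* 0x82F63B78 is the reflected Castagnoli polynomial */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;                           /* final inversion */
}

/* Returns true when the DDGST value carried in the PDU matches the payload. */
static bool pdu_data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst;
}

Each "Data digest error on tqpair=..." line records one such mismatch; since dnr (do not retry) is 0 in every completion, the initiator is allowed to retry the failed WRITE.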
00:23:05.161 [2024-12-13 13:07:45.818537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.818646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.818665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.822447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.822610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.822629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.826400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.826531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.826549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.830453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.830667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.830687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.834369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.834577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.834596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.838235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.838420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.838439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.842209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.842322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.842341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.846212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.846365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.846384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.850213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.850324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.850343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.854211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.854374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.854393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.858417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.858555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.858574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.862487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.862701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.862721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.866491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.866703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.866722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.870539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.870713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.870732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.874468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.874606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.874626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.878646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.878785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.878805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.882721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.882845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.882864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.161 [2024-12-13 13:07:45.886802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.161 [2024-12-13 13:07:45.886976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.161 [2024-12-13 13:07:45.886996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.890924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.891091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.891135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.894861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.895077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.895096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.898828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.899046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.899065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.902973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.903168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.903189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.906977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.907089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.907133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.910933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.911044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.911063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.914994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.915117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.915154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.919045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.919261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.919283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.923178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.923293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.923315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.927239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.927482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 
[2024-12-13 13:07:45.927530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.162 [2024-12-13 13:07:45.931578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.162 [2024-12-13 13:07:45.931778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.162 [2024-12-13 13:07:45.931798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.422 [2024-12-13 13:07:45.936162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.422 [2024-12-13 13:07:45.936328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.422 [2024-12-13 13:07:45.936347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.422 [2024-12-13 13:07:45.940434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.422 [2024-12-13 13:07:45.940561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.422 [2024-12-13 13:07:45.940580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.422 [2024-12-13 13:07:45.944650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.422 [2024-12-13 13:07:45.944756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.422 [2024-12-13 13:07:45.944787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.422 [2024-12-13 13:07:45.948629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.422 [2024-12-13 13:07:45.948739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.422 [2024-12-13 13:07:45.948758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.422 [2024-12-13 13:07:45.952678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.422 [2024-12-13 13:07:45.952860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.422 [2024-12-13 13:07:45.952879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.956645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.956801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.956820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.960870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.961081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.961101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.964876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.965086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.965105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.968852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.969026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.969045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.972821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.972958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.972978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.976719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.976855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.976874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.980617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.980735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.980754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.984618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.984793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.984812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.988690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.988846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.988866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.992806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.993023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.993042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:45.996807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:45.997030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:45.997048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.000852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.001041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.001060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.004734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.004865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.004884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.008695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.008835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.008853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.012706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.012850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.012870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.016895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.017059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.017079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.021172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.021346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.021365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.025568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.025792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.025811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.029595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.029801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.029820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.033608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.033823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.033844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.037770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.037885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.037905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.041731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 
[2024-12-13 13:07:46.041862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.041880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.045815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.045926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.045945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.049844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.050008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.050027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.053831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.053963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.053981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.058044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.058274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.058293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.062080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.062320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.423 [2024-12-13 13:07:46.062339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.423 [2024-12-13 13:07:46.066198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.423 [2024-12-13 13:07:46.066384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.066403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.070249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.070370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.070389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.074222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.074332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.074351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.078225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.078341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.078360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.082247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.082407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.082426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.086265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.086450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.086469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.090354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.090565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.090585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.094321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.094568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.094603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.098411] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.098597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.098615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.102435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.102548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.102567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.106344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.106480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.106500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.110455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.110564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.110582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.114440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.114599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.114618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.118433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.118568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.118587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.122589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.122814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.122833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:05.424 [2024-12-13 13:07:46.126613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.126842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.126861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.130655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.130849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.130869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.134623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.134732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.134751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.138547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.138668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.138687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.142387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.142495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.142514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.146438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.146608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.146626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.150498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.150626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.150647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.154807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.155047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.155068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.159353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.159603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.159649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.163993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.164222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.164279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.168591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.168702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.168721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.173010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.173126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.173177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.177416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.424 [2024-12-13 13:07:46.177536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.424 [2024-12-13 13:07:46.177556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.424 [2024-12-13 13:07:46.181874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.425 [2024-12-13 13:07:46.182055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.425 [2024-12-13 13:07:46.182077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.425 [2024-12-13 13:07:46.186322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.425 [2024-12-13 13:07:46.186454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.425 [2024-12-13 13:07:46.186473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.425 [2024-12-13 13:07:46.190655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.425 [2024-12-13 13:07:46.190930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.425 [2024-12-13 13:07:46.190995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.425 [2024-12-13 13:07:46.195194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.425 [2024-12-13 13:07:46.195505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.425 [2024-12-13 13:07:46.195550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.199745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.199941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.199961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.203868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.203980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.204016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.207958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.208067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.208087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.212019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.212123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.212142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.216051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.216211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.216230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.220047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.220199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.220218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.224253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.224467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.224486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.228258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.228502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.228568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.232281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.232454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.232473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.236252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.236374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.236394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.240257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.240391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 
[2024-12-13 13:07:46.240410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.684 [2024-12-13 13:07:46.244243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2045ba0) with pdu=0x2000190fef90 00:23:05.684 [2024-12-13 13:07:46.244366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.684 [2024-12-13 13:07:46.244386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.684 00:23:05.684 Latency(us) 00:23:05.684 [2024-12-13T13:07:46.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.684 [2024-12-13T13:07:46.460Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:05.684 nvme0n1 : 2.00 7526.89 940.86 0.00 0.00 2121.18 1660.74 12273.11 00:23:05.684 [2024-12-13T13:07:46.460Z] =================================================================================================================== 00:23:05.684 [2024-12-13T13:07:46.460Z] Total : 7526.89 940.86 0.00 0.00 2121.18 1660.74 12273.11 00:23:05.684 0 00:23:05.684 13:07:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:05.684 13:07:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:05.684 13:07:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:05.684 | .driver_specific 00:23:05.684 | .nvme_error 00:23:05.684 | .status_code 00:23:05.684 | .command_transient_transport_error' 00:23:05.684 13:07:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:05.943 13:07:46 -- host/digest.sh@71 -- # (( 485 > 0 )) 00:23:05.943 13:07:46 -- host/digest.sh@73 -- # killprocess 97654 00:23:05.943 13:07:46 -- common/autotest_common.sh@936 -- # '[' -z 97654 ']' 00:23:05.943 13:07:46 -- common/autotest_common.sh@940 -- # kill -0 97654 00:23:05.943 13:07:46 -- common/autotest_common.sh@941 -- # uname 00:23:05.943 13:07:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.943 13:07:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97654 00:23:05.943 13:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.943 killing process with pid 97654 00:23:05.943 Received shutdown signal, test time was about 2.000000 seconds 00:23:05.943 00:23:05.943 Latency(us) 00:23:05.943 [2024-12-13T13:07:46.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.943 [2024-12-13T13:07:46.719Z] =================================================================================================================== 00:23:05.943 [2024-12-13T13:07:46.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.943 13:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.943 13:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97654' 00:23:05.943 13:07:46 -- common/autotest_common.sh@955 -- # kill 97654 00:23:05.943 13:07:46 -- common/autotest_common.sh@960 -- # wait 97654 00:23:05.943 13:07:46 -- host/digest.sh@115 -- # killprocess 97358 00:23:05.943 13:07:46 -- common/autotest_common.sh@936 -- # '[' -z 97358 ']' 00:23:05.943 13:07:46 -- common/autotest_common.sh@940 -- # kill -0 97358 00:23:05.943 13:07:46 -- common/autotest_common.sh@941 -- # uname 00:23:05.943 13:07:46 -- 
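Each WRITE in the dump above fails its CRC32C data-digest check (data_crc32_calc_done in tcp.c) and is completed with TRANSIENT TRANSPORT ERROR (00/22), which is what this error-injection run is meant to provoke. The pass/fail decision then reduces to the counter read in the trace just above; the following is a condensed sketch of that check, with the rpc.py path and the bperf socket name taken from the trace and all error handling trimmed:

rpcpy=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_transient_errcount() {
    # Pull the bdev iostat record over JSON-RPC and extract the NVMe
    # transient-transport-error counter kept under driver_specific.
    local bdev=$1
    "$rpcpy" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# This run reported 485 such completions; the test only asserts the count is non-zero.
(( $(get_transient_errcount nvme0n1) > 0 ))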
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.943 13:07:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97358 00:23:06.202 13:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:06.202 killing process with pid 97358 00:23:06.202 13:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:06.202 13:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97358' 00:23:06.202 13:07:46 -- common/autotest_common.sh@955 -- # kill 97358 00:23:06.202 13:07:46 -- common/autotest_common.sh@960 -- # wait 97358 00:23:06.202 00:23:06.202 real 0m17.877s 00:23:06.202 user 0m34.708s 00:23:06.202 sys 0m4.772s 00:23:06.202 ************************************ 00:23:06.202 END TEST nvmf_digest_error 00:23:06.202 ************************************ 00:23:06.202 13:07:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:06.202 13:07:46 -- common/autotest_common.sh@10 -- # set +x 00:23:06.202 13:07:46 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:06.202 13:07:46 -- host/digest.sh@139 -- # nvmftestfini 00:23:06.203 13:07:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:06.203 13:07:46 -- nvmf/common.sh@116 -- # sync 00:23:06.461 13:07:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:06.461 13:07:47 -- nvmf/common.sh@119 -- # set +e 00:23:06.461 13:07:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:06.461 13:07:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:06.461 rmmod nvme_tcp 00:23:06.461 rmmod nvme_fabrics 00:23:06.461 rmmod nvme_keyring 00:23:06.461 13:07:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:06.461 13:07:47 -- nvmf/common.sh@123 -- # set -e 00:23:06.461 13:07:47 -- nvmf/common.sh@124 -- # return 0 00:23:06.461 13:07:47 -- nvmf/common.sh@477 -- # '[' -n 97358 ']' 00:23:06.461 13:07:47 -- nvmf/common.sh@478 -- # killprocess 97358 00:23:06.461 13:07:47 -- common/autotest_common.sh@936 -- # '[' -z 97358 ']' 00:23:06.461 13:07:47 -- common/autotest_common.sh@940 -- # kill -0 97358 00:23:06.461 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97358) - No such process 00:23:06.461 Process with pid 97358 is not found 00:23:06.461 13:07:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97358 is not found' 00:23:06.461 13:07:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:06.461 13:07:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:06.461 13:07:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:06.461 13:07:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.461 13:07:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:06.461 13:07:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.461 13:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.461 13:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.461 13:07:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:06.461 00:23:06.461 real 0m35.117s 00:23:06.461 user 1m6.096s 00:23:06.461 sys 0m9.798s 00:23:06.461 13:07:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:06.461 ************************************ 00:23:06.461 END TEST nvmf_digest 00:23:06.461 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:06.461 ************************************ 00:23:06.461 13:07:47 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:06.461 13:07:47 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:06.461 13:07:47 -- 
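killprocess runs twice in the teardown traced above: pid 97654 (the bperf reactor) and pid 97358 go down on the first attempt, and the later call from nvmftestfini finds 97358 already gone and treats that as success (the "No such process" line). A rough sketch of that helper, simplified from what the trace shows; the real function in autotest_common.sh also special-cases processes launched through sudo, which is only noted here:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if ! kill -0 "$pid" 2>/dev/null; then
        # Already exited, e.g. the second attempt on 97358 above.
        echo "Process with pid $pid is not found"
        return 0
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
        return 1    # the trace checks for sudo; handling of that case is omitted here
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}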
nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:06.461 13:07:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:06.461 13:07:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:06.461 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:06.461 ************************************ 00:23:06.461 START TEST nvmf_mdns_discovery 00:23:06.461 ************************************ 00:23:06.461 13:07:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:06.721 * Looking for test storage... 00:23:06.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:06.721 13:07:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:06.721 13:07:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:06.721 13:07:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:06.721 13:07:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:06.721 13:07:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:06.721 13:07:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:06.721 13:07:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:06.721 13:07:47 -- scripts/common.sh@335 -- # IFS=.-: 00:23:06.721 13:07:47 -- scripts/common.sh@335 -- # read -ra ver1 00:23:06.721 13:07:47 -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.721 13:07:47 -- scripts/common.sh@336 -- # read -ra ver2 00:23:06.721 13:07:47 -- scripts/common.sh@337 -- # local 'op=<' 00:23:06.721 13:07:47 -- scripts/common.sh@339 -- # ver1_l=2 00:23:06.721 13:07:47 -- scripts/common.sh@340 -- # ver2_l=1 00:23:06.721 13:07:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:06.721 13:07:47 -- scripts/common.sh@343 -- # case "$op" in 00:23:06.721 13:07:47 -- scripts/common.sh@344 -- # : 1 00:23:06.721 13:07:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:06.721 13:07:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.721 13:07:47 -- scripts/common.sh@364 -- # decimal 1 00:23:06.721 13:07:47 -- scripts/common.sh@352 -- # local d=1 00:23:06.721 13:07:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.721 13:07:47 -- scripts/common.sh@354 -- # echo 1 00:23:06.721 13:07:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:06.721 13:07:47 -- scripts/common.sh@365 -- # decimal 2 00:23:06.721 13:07:47 -- scripts/common.sh@352 -- # local d=2 00:23:06.721 13:07:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.721 13:07:47 -- scripts/common.sh@354 -- # echo 2 00:23:06.721 13:07:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:06.721 13:07:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:06.721 13:07:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:06.721 13:07:47 -- scripts/common.sh@367 -- # return 0 00:23:06.721 13:07:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.721 13:07:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:06.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.721 --rc genhtml_branch_coverage=1 00:23:06.721 --rc genhtml_function_coverage=1 00:23:06.721 --rc genhtml_legend=1 00:23:06.721 --rc geninfo_all_blocks=1 00:23:06.721 --rc geninfo_unexecuted_blocks=1 00:23:06.721 00:23:06.721 ' 00:23:06.721 13:07:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:06.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.721 --rc genhtml_branch_coverage=1 00:23:06.721 --rc genhtml_function_coverage=1 00:23:06.721 --rc genhtml_legend=1 00:23:06.721 --rc geninfo_all_blocks=1 00:23:06.721 --rc geninfo_unexecuted_blocks=1 00:23:06.721 00:23:06.721 ' 00:23:06.721 13:07:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:06.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.721 --rc genhtml_branch_coverage=1 00:23:06.721 --rc genhtml_function_coverage=1 00:23:06.721 --rc genhtml_legend=1 00:23:06.721 --rc geninfo_all_blocks=1 00:23:06.721 --rc geninfo_unexecuted_blocks=1 00:23:06.721 00:23:06.721 ' 00:23:06.721 13:07:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:06.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.721 --rc genhtml_branch_coverage=1 00:23:06.721 --rc genhtml_function_coverage=1 00:23:06.721 --rc genhtml_legend=1 00:23:06.721 --rc geninfo_all_blocks=1 00:23:06.721 --rc geninfo_unexecuted_blocks=1 00:23:06.721 00:23:06.721 ' 00:23:06.721 13:07:47 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.721 13:07:47 -- nvmf/common.sh@7 -- # uname -s 00:23:06.721 13:07:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.721 13:07:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.721 13:07:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.721 13:07:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.721 13:07:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.721 13:07:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.721 13:07:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.721 13:07:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.721 13:07:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.721 13:07:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.721 13:07:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 
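The scripts/common.sh block above is the harness asking whether the installed lcov (1.15 here) is older than 2: both version strings are split on '.', '-' and ':' and the numeric fields are compared left to right. A simplified sketch of that comparison under the same splitting rule; the real cmp_versions also validates each field and supports the other comparison operators:

version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not strictly less-than
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    # Old lcov: the harness exports the --rc lcov_*_coverage=1 option set seen above.
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi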
00:23:06.721 13:07:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:23:06.721 13:07:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.721 13:07:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.721 13:07:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:06.721 13:07:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.721 13:07:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.721 13:07:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.721 13:07:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.721 13:07:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.721 13:07:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.721 13:07:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.721 13:07:47 -- paths/export.sh@5 -- # export PATH 00:23:06.721 13:07:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.721 13:07:47 -- nvmf/common.sh@46 -- # : 0 00:23:06.721 13:07:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:06.721 13:07:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:06.721 13:07:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:06.721 13:07:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.721 13:07:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.721 13:07:47 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:06.721 13:07:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:06.721 13:07:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:06.721 13:07:47 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:06.721 13:07:47 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:06.722 13:07:47 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:06.722 13:07:47 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:06.722 13:07:47 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:06.722 13:07:47 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:06.722 13:07:47 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:06.722 13:07:47 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:06.722 13:07:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:06.722 13:07:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.722 13:07:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:06.722 13:07:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:06.722 13:07:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:06.722 13:07:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.722 13:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.722 13:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.722 13:07:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:06.722 13:07:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:06.722 13:07:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:06.722 13:07:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:06.722 13:07:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:06.722 13:07:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:06.722 13:07:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.722 13:07:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.722 13:07:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:06.722 13:07:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:06.722 13:07:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:06.722 13:07:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:06.722 13:07:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:06.722 13:07:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.722 13:07:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:06.722 13:07:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:06.722 13:07:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:06.722 13:07:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:06.722 13:07:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:06.722 13:07:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:06.722 Cannot find device "nvmf_tgt_br" 00:23:06.722 13:07:47 -- nvmf/common.sh@154 -- # true 00:23:06.722 13:07:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:06.722 Cannot find device "nvmf_tgt_br2" 00:23:06.722 13:07:47 -- nvmf/common.sh@155 -- # true 00:23:06.722 13:07:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:06.722 13:07:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:06.722 Cannot find device "nvmf_tgt_br" 00:23:06.722 13:07:47 -- nvmf/common.sh@157 -- # true 00:23:06.722 
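The variables above fix the layout that nvmf_veth_init builds in the next few steps: the target lives in the nvmf_tgt_ns_spdk namespace behind two veth pairs (10.0.0.2 and 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, and the host-side peers are enslaved to the nvmf_br bridge. Condensed to the first target interface only (link-up commands and the second pair omitted), the setup amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT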
13:07:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:06.722 Cannot find device "nvmf_tgt_br2" 00:23:06.722 13:07:47 -- nvmf/common.sh@158 -- # true 00:23:06.722 13:07:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:06.981 13:07:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:06.981 13:07:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:06.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.981 13:07:47 -- nvmf/common.sh@161 -- # true 00:23:06.981 13:07:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:06.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.981 13:07:47 -- nvmf/common.sh@162 -- # true 00:23:06.981 13:07:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:06.981 13:07:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:06.981 13:07:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:06.981 13:07:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:06.981 13:07:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:06.981 13:07:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:06.981 13:07:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:06.981 13:07:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:06.981 13:07:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:06.981 13:07:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:06.981 13:07:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:06.981 13:07:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:06.981 13:07:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:06.981 13:07:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:06.981 13:07:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:06.981 13:07:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:06.981 13:07:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:06.981 13:07:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:06.981 13:07:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:06.981 13:07:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:06.981 13:07:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:06.981 13:07:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:06.981 13:07:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:06.981 13:07:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:06.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:23:06.981 00:23:06.981 --- 10.0.0.2 ping statistics --- 00:23:06.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.981 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:06.981 13:07:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:06.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:06.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:23:06.981 00:23:06.981 --- 10.0.0.3 ping statistics --- 00:23:06.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.981 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:06.981 13:07:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:06.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:06.981 00:23:06.981 --- 10.0.0.1 ping statistics --- 00:23:06.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.981 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:06.981 13:07:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.981 13:07:47 -- nvmf/common.sh@421 -- # return 0 00:23:06.981 13:07:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:06.981 13:07:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.981 13:07:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:06.981 13:07:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:06.981 13:07:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.981 13:07:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:06.981 13:07:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:06.981 13:07:47 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:06.981 13:07:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:06.981 13:07:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:06.981 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:06.981 13:07:47 -- nvmf/common.sh@469 -- # nvmfpid=97952 00:23:06.981 13:07:47 -- nvmf/common.sh@470 -- # waitforlisten 97952 00:23:06.981 13:07:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:06.981 13:07:47 -- common/autotest_common.sh@829 -- # '[' -z 97952 ']' 00:23:06.981 13:07:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.981 13:07:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.981 13:07:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.981 13:07:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.981 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:07.240 [2024-12-13 13:07:47.791188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:07.240 [2024-12-13 13:07:47.791280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.240 [2024-12-13 13:07:47.932167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.240 [2024-12-13 13:07:47.999955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:07.240 [2024-12-13 13:07:48.000123] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.240 [2024-12-13 13:07:48.000141] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
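With the three pings confirming the veth topology (10.0.0.2 and 10.0.0.3 reachable from the host side, 10.0.0.1 reachable from inside the namespace), nvmfappstart launches the target inside nvmf_tgt_ns_spdk and waitforlisten blocks until it is ready. A condensed sketch of that launch, using the binary path and flags from the log and a simple socket poll as a stand-in for waitforlisten (the default RPC socket /var/tmp/spdk.sock is an assumption):

# start the NVMe-oF target inside the test namespace
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
# wait until the app answers on its RPC socket (waitforlisten polls in a similar way)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
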
00:23:07.240 [2024-12-13 13:07:48.000153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.240 [2024-12-13 13:07:48.000182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.499 13:07:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.499 13:07:48 -- common/autotest_common.sh@862 -- # return 0 00:23:07.499 13:07:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:07.499 13:07:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 13:07:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 [2024-12-13 13:07:48.184642] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 [2024-12-13 13:07:48.192814] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 null0 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 null1 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 null2 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 null3 00:23:07.499 13:07:48 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:07.499 13:07:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.499 13:07:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@47 -- # hostpid=97994 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:07.499 13:07:48 -- host/mdns_discovery.sh@48 -- # waitforlisten 97994 /tmp/host.sock 00:23:07.499 13:07:48 -- common/autotest_common.sh@829 -- # '[' -z 97994 ']' 00:23:07.499 13:07:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:07.499 13:07:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.499 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:07.499 13:07:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:07.499 13:07:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.499 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.758 [2024-12-13 13:07:48.295597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:07.758 [2024-12-13 13:07:48.295693] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97994 ] 00:23:07.758 [2024-12-13 13:07:48.431804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.758 [2024-12-13 13:07:48.489562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:07.758 [2024-12-13 13:07:48.489726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.694 13:07:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.694 13:07:49 -- common/autotest_common.sh@862 -- # return 0 00:23:08.694 13:07:49 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:08.694 13:07:49 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:08.694 13:07:49 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:08.694 13:07:49 -- host/mdns_discovery.sh@57 -- # avahipid=98024 00:23:08.694 13:07:49 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:08.694 13:07:49 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:08.694 13:07:49 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:08.694 Process 1062 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:08.694 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:08.694 Successfully dropped root privileges. 00:23:08.694 avahi-daemon 0.8 starting up. 00:23:08.694 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:08.694 Successfully called chroot(). 00:23:08.694 Successfully dropped remaining capabilities. 00:23:08.694 No service file found in /etc/avahi/services. 00:23:09.629 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
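The host-side app is started with -r /tmp/host.sock, and avahi-daemon runs inside the target namespace with a configuration fed through process substitution (the /dev/fd/63 seen above). Written out as an ordinary file, that inline configuration is the following (the file path is illustrative):

# contents of the inline avahi config used above
cat > /tmp/avahi-nvmf.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF
# equivalent to the process-substitution form in the log
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf
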
00:23:09.629 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:09.629 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:09.629 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:09.629 Network interface enumeration completed. 00:23:09.629 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:09.629 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:09.629 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:09.629 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:09.630 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2858006521. 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:09.888 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.888 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:09.888 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.888 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:09.888 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # sort 00:23:09.888 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # xargs 00:23:09.888 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.888 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:09.888 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@64 -- # sort 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@64 -- # xargs 00:23:09.888 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:09.888 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.888 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:09.888 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.888 13:07:50 -- common/autotest_common.sh@10 -- # set +x 
00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # sort 00:23:09.888 13:07:50 -- host/mdns_discovery.sh@68 -- # xargs 00:23:09.889 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.889 13:07:50 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:09.889 13:07:50 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:09.889 13:07:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.889 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.889 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:09.889 13:07:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:09.889 13:07:50 -- host/mdns_discovery.sh@64 -- # sort 00:23:09.889 13:07:50 -- host/mdns_discovery.sh@64 -- # xargs 00:23:09.889 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@68 -- # sort 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@68 -- # xargs 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@64 -- # sort 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@64 -- # xargs 00:23:10.148 [2024-12-13 13:07:50.754196] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 [2024-12-13 13:07:50.805464] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 
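The checks above all go through rpc_cmd: bdev_nvme_start_mdns_discovery registers an mDNS browser for the _nvme-disc._tcp service type on the host app, and get_subsystem_names/get_bdev_list simply list controllers and bdevs, extracting names with jq. Issued directly with SPDK's scripts/rpc.py (same sockets and arguments as in the log), the sequence is roughly:

# host app (/tmp/host.sock): start mDNS-based discovery with the test host NQN
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# what get_subsystem_names and get_bdev_list boil down to
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
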
13:07:50 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 [2024-12-13 13:07:50.845433] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:10.148 13:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.148 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.148 [2024-12-13 13:07:50.853441] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:10.148 13:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98075 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:10.148 13:07:50 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:11.084 [2024-12-13 13:07:51.654195] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:11.084 Established under name 'CDC' 00:23:11.342 [2024-12-13 13:07:52.054219] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:11.342 [2024-12-13 13:07:52.054411] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:11.342 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:11.342 cookie is 0 00:23:11.342 is_local: 1 00:23:11.342 our_own: 0 00:23:11.342 wide_area: 0 00:23:11.342 multicast: 1 00:23:11.342 cached: 1 00:23:11.601 [2024-12-13 13:07:52.154203] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:11.601 [2024-12-13 13:07:52.154221] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:11.601 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:11.601 cookie is 0 00:23:11.601 is_local: 1 00:23:11.601 our_own: 0 00:23:11.601 wide_area: 0 00:23:11.601 multicast: 1 00:23:11.601 
cached: 1 00:23:12.537 [2024-12-13 13:07:53.058813] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:12.537 [2024-12-13 13:07:53.058836] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:12.537 [2024-12-13 13:07:53.058852] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:12.537 [2024-12-13 13:07:53.144919] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:12.537 [2024-12-13 13:07:53.158560] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:12.537 [2024-12-13 13:07:53.158726] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:12.537 [2024-12-13 13:07:53.158819] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:12.537 [2024-12-13 13:07:53.203261] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:12.537 [2024-12-13 13:07:53.203483] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:12.537 [2024-12-13 13:07:53.246650] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:12.537 [2024-12-13 13:07:53.308819] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:12.537 [2024-12-13 13:07:53.308857] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:15.828 13:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:55 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@80 -- # sort 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@80 -- # xargs 00:23:15.828 13:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:15.828 13:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:55 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@76 -- # sort 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@76 -- # xargs 00:23:15.828 13:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@68 -- # jq -r 
'.[].name' 00:23:15.828 13:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@68 -- # sort 00:23:15.828 13:07:55 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:55 -- host/mdns_discovery.sh@68 -- # xargs 00:23:15.828 13:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.828 13:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:15.828 13:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@64 -- # sort 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@64 -- # xargs 00:23:15.828 13:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:15.828 13:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # xargs 00:23:15.828 13:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:15.828 13:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@72 -- # xargs 00:23:15.828 13:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:15.828 13:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:15.828 13:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:15.828 13:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.828 13:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:15.828 13:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.828 13:07:56 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.765 13:07:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:16.765 13:07:57 -- common/autotest_common.sh@10 -- # set +x 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@64 -- # sort 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@64 -- # xargs 00:23:16.765 13:07:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:16.765 13:07:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.765 13:07:57 -- common/autotest_common.sh@10 -- # set +x 00:23:16.765 13:07:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:16.765 13:07:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.765 13:07:57 -- common/autotest_common.sh@10 -- # set +x 00:23:16.765 [2024-12-13 13:07:57.380615] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.765 [2024-12-13 13:07:57.381020] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:16.765 [2024-12-13 13:07:57.381060] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.765 [2024-12-13 13:07:57.381096] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:16.765 [2024-12-13 13:07:57.381110] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:16.765 13:07:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:16.765 13:07:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.765 13:07:57 -- common/autotest_common.sh@10 -- # set +x 00:23:16.765 [2024-12-13 13:07:57.388583] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:16.765 [2024-12-13 13:07:57.389027] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:16.765 [2024-12-13 13:07:57.389087] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:16.765 13:07:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.765 13:07:57 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:16.765 [2024-12-13 13:07:57.520117] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:16.765 [2024-12-13 13:07:57.520295] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:17.024 [2024-12-13 13:07:57.581533] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:17.024 [2024-12-13 13:07:57.581557] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:17.024 [2024-12-13 13:07:57.581579] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:17.024 [2024-12-13 13:07:57.581595] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:17.024 [2024-12-13 13:07:57.582350] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:17.024 [2024-12-13 13:07:57.582370] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:17.024 [2024-12-13 13:07:57.582392] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:17.024 [2024-12-13 13:07:57.582435] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.024 [2024-12-13 13:07:57.627276] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:17.024 [2024-12-13 13:07:57.627298] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:17.024 [2024-12-13 13:07:57.628261] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:17.024 [2024-12-13 13:07:57.628276] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:17.961 13:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.961 13:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@68 -- # sort 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@68 -- # xargs 00:23:17.961 13:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.961 13:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.961 13:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@64 -- # xargs 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@64 -- # sort 00:23:17.961 13:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:17.961 13:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # xargs 00:23:17.961 13:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:17.961 13:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:17.961 13:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.961 13:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@72 -- # xargs 00:23:17.961 13:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:17.961 13:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.961 13:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:17.961 13:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.961 13:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.961 13:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:17.961 [2024-12-13 13:07:58.685969] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:17.961 [2024-12-13 13:07:58.686004] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.961 [2024-12-13 13:07:58.686036] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:17.961 [2024-12-13 13:07:58.686048] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:17.961 [2024-12-13 13:07:58.687850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.961 [2024-12-13 13:07:58.687893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.961 [2024-12-13 13:07:58.687923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.961 [2024-12-13 13:07:58.687931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.961 [2024-12-13 13:07:58.687940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.961 [2024-12-13 13:07:58.687949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.961 [2024-12-13 13:07:58.687972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.961 [2024-12-13 13:07:58.687996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.961 [2024-12-13 13:07:58.688020] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:17.961 13:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:17.961 13:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.961 13:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:17.961 [2024-12-13 13:07:58.694090] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:17.961 [2024-12-13 13:07:58.694201] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:17.961 [2024-12-13 13:07:58.697816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:17.961 13:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.961 13:07:58 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:17.962 [2024-12-13 13:07:58.702003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.962 [2024-12-13 13:07:58.702032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.962 [2024-12-13 13:07:58.702043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.962 [2024-12-13 13:07:58.702051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.962 [2024-12-13 13:07:58.702060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.962 [2024-12-13 13:07:58.702068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.962 [2024-12-13 13:07:58.702077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.962 [2024-12-13 13:07:58.702085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.962 [2024-12-13 13:07:58.702093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:17.962 [2024-12-13 13:07:58.707840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:17.962 [2024-12-13 13:07:58.707950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.707996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.708011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:17.962 [2024-12-13 13:07:58.708020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:17.962 [2024-12-13 13:07:58.708036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:17.962 [2024-12-13 13:07:58.708049] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:17.962 [2024-12-13 13:07:58.708057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:17.962 [2024-12-13 13:07:58.708065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:17.962 [2024-12-13 13:07:58.708095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:17.962 [2024-12-13 13:07:58.711968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:17.962 [2024-12-13 13:07:58.717905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:17.962 [2024-12-13 13:07:58.717979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.718022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.718035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:17.962 [2024-12-13 13:07:58.718044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:17.962 [2024-12-13 13:07:58.718058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:17.962 [2024-12-13 13:07:58.718070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:17.962 [2024-12-13 13:07:58.718077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:17.962 [2024-12-13 13:07:58.718100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:17.962 [2024-12-13 13:07:58.718129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:17.962 [2024-12-13 13:07:58.721992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:17.962 [2024-12-13 13:07:58.722067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.722110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.722124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:17.962 [2024-12-13 13:07:58.722133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:17.962 [2024-12-13 13:07:58.722147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:17.962 [2024-12-13 13:07:58.722159] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:17.962 [2024-12-13 13:07:58.722166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:17.962 [2024-12-13 13:07:58.722174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:17.962 [2024-12-13 13:07:58.722187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:17.962 [2024-12-13 13:07:58.727948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:17.962 [2024-12-13 13:07:58.728034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.728076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.728089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:17.962 [2024-12-13 13:07:58.728098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:17.962 [2024-12-13 13:07:58.728111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:17.962 [2024-12-13 13:07:58.728123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:17.962 [2024-12-13 13:07:58.728130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:17.962 [2024-12-13 13:07:58.728137] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:17.962 [2024-12-13 13:07:58.728149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:17.962 [2024-12-13 13:07:58.732037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:17.962 [2024-12-13 13:07:58.732140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.732182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.962 [2024-12-13 13:07:58.732196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:17.962 [2024-12-13 13:07:58.732204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:17.962 [2024-12-13 13:07:58.732218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:17.962 [2024-12-13 13:07:58.732246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:17.962 [2024-12-13 13:07:58.732256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:17.962 [2024-12-13 13:07:58.732263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:17.962 [2024-12-13 13:07:58.732276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.222 [2024-12-13 13:07:58.738008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.222 [2024-12-13 13:07:58.738095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.222 [2024-12-13 13:07:58.738152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.222 [2024-12-13 13:07:58.738165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.222 [2024-12-13 13:07:58.738174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.222 [2024-12-13 13:07:58.738218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.222 [2024-12-13 13:07:58.738230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.222 [2024-12-13 13:07:58.738238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.222 [2024-12-13 13:07:58.738245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.222 [2024-12-13 13:07:58.738322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.222 [2024-12-13 13:07:58.742099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.223 [2024-12-13 13:07:58.742224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.742268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.742283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.223 [2024-12-13 13:07:58.742291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.742306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.742335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.742344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.742352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.223 [2024-12-13 13:07:58.742380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.223 [2024-12-13 13:07:58.748067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.223 [2024-12-13 13:07:58.748156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.748198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.748212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.223 [2024-12-13 13:07:58.748221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.748234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.748246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.748253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.748261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.223 [2024-12-13 13:07:58.748273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.223 [2024-12-13 13:07:58.752191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.223 [2024-12-13 13:07:58.752277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.752318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.752332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.223 [2024-12-13 13:07:58.752340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.752354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.752382] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.752392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.752399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.223 [2024-12-13 13:07:58.752412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.223 [2024-12-13 13:07:58.758127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.223 [2024-12-13 13:07:58.758211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.758252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.758265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.223 [2024-12-13 13:07:58.758273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.758287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.758299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.758306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.758313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.223 [2024-12-13 13:07:58.758341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.223 [2024-12-13 13:07:58.762250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.223 [2024-12-13 13:07:58.762336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.762378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.762393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.223 [2024-12-13 13:07:58.762402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.762416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.762450] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.762460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.762468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.223 [2024-12-13 13:07:58.762480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.223 [2024-12-13 13:07:58.768184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.223 [2024-12-13 13:07:58.768268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.768309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.768323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.223 [2024-12-13 13:07:58.768331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.768345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.768356] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.768363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.768371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.223 [2024-12-13 13:07:58.768383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.223 [2024-12-13 13:07:58.772309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.223 [2024-12-13 13:07:58.772394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.772435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.772449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.223 [2024-12-13 13:07:58.772457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.772471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.772504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.772513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.772521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.223 [2024-12-13 13:07:58.772534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.223 [2024-12-13 13:07:58.778241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.223 [2024-12-13 13:07:58.778326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.778368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.778381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.223 [2024-12-13 13:07:58.778390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.778403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.778415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.778422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.778430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.223 [2024-12-13 13:07:58.778442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.223 [2024-12-13 13:07:58.782368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.223 [2024-12-13 13:07:58.782463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.782505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.782520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.223 [2024-12-13 13:07:58.782528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.782542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.223 [2024-12-13 13:07:58.782602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.223 [2024-12-13 13:07:58.782614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.223 [2024-12-13 13:07:58.782623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.223 [2024-12-13 13:07:58.782635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.223 [2024-12-13 13:07:58.788301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.223 [2024-12-13 13:07:58.788393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.788436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.223 [2024-12-13 13:07:58.788450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.223 [2024-12-13 13:07:58.788458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.223 [2024-12-13 13:07:58.788472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.788484] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.788490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.788498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.224 [2024-12-13 13:07:58.788511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.224 [2024-12-13 13:07:58.792433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.224 [2024-12-13 13:07:58.792520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.792562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.792577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.224 [2024-12-13 13:07:58.792586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.224 [2024-12-13 13:07:58.792600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.792635] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.792645] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.792653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.224 [2024-12-13 13:07:58.792665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.224 [2024-12-13 13:07:58.798363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.224 [2024-12-13 13:07:58.798450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.798492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.798506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.224 [2024-12-13 13:07:58.798515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.224 [2024-12-13 13:07:58.798528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.798540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.798547] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.798555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.224 [2024-12-13 13:07:58.798567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.224 [2024-12-13 13:07:58.802493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.224 [2024-12-13 13:07:58.802579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.802622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.802636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.224 [2024-12-13 13:07:58.802645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.224 [2024-12-13 13:07:58.802660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.802696] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.802707] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.802715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.224 [2024-12-13 13:07:58.802727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.224 [2024-12-13 13:07:58.808422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.224 [2024-12-13 13:07:58.808513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.808555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.808569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.224 [2024-12-13 13:07:58.808577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.224 [2024-12-13 13:07:58.808590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.808602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.808609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.808616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.224 [2024-12-13 13:07:58.808629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.224 [2024-12-13 13:07:58.812550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.224 [2024-12-13 13:07:58.812636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.812678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.812692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.224 [2024-12-13 13:07:58.812701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.224 [2024-12-13 13:07:58.812715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.812750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.812788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.812798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.224 [2024-12-13 13:07:58.812812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.224 [2024-12-13 13:07:58.818487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.224 [2024-12-13 13:07:58.818572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.818613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.818626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d48d00 with addr=10.0.0.2, port=4420 00:23:18.224 [2024-12-13 13:07:58.818635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d00 is same with the state(5) to be set 00:23:18.224 [2024-12-13 13:07:58.818648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d48d00 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.818661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.818668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.818675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:18.224 [2024-12-13 13:07:58.818687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.224 [2024-12-13 13:07:58.822607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:18.224 [2024-12-13 13:07:58.822692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.822749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.224 [2024-12-13 13:07:58.822763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf6070 with addr=10.0.0.3, port=4420 00:23:18.224 [2024-12-13 13:07:58.822797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf6070 is same with the state(5) to be set 00:23:18.224 [2024-12-13 13:07:58.822814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf6070 (9): Bad file descriptor 00:23:18.224 [2024-12-13 13:07:58.822850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:18.224 [2024-12-13 13:07:58.822860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:18.224 [2024-12-13 13:07:58.822867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:18.224 [2024-12-13 13:07:58.822880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
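The connect() failures above (errno 111, i.e. ECONNREFUSED) are expected at this point in the test: each subsystem's 4420 listener has been taken down while 4421 stays up, so the already-attached controllers keep retrying the stale port until the next discovery log page (reported just below as 4420 "not found" / 4421 "found again") repoints them. A minimal sketch of the kind of listener move that provokes this pattern, using the rpc.py path and addresses visible elsewhere in this log; the exact sequence the test script runs is an assumption:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the old listener; attached hosts now get ECONNREFUSED (errno 111) when they retry 10.0.0.2:4420.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Keep (or add) the replacement listener on 4421; the next discovery log page advertises only this port.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421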
00:23:18.224 [2024-12-13 13:07:58.824851] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:18.224 [2024-12-13 13:07:58.824893] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:18.224 [2024-12-13 13:07:58.824911] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:18.224 [2024-12-13 13:07:58.825856] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:18.224 [2024-12-13 13:07:58.825895] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:18.224 [2024-12-13 13:07:58.825911] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.224 [2024-12-13 13:07:58.910913] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:18.224 [2024-12-13 13:07:58.912908] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:19.161 13:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@68 -- # sort 00:23:19.161 13:07:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@68 -- # xargs 00:23:19.161 13:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.161 13:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.161 13:07:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@64 -- # sort 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@64 -- # xargs 00:23:19.161 13:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:19.161 13:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.161 13:07:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # xargs 00:23:19.161 13:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
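Each verification step above is the same small pipeline over the host application's JSON-RPC socket: query, pull out the field of interest with jq, sort, and flatten with xargs so the result can be compared against a literal string. A minimal sketch of one such helper, reconstructed from the xtrace output (the function body and name are assumptions; the individual commands are taken verbatim from the trace):

  get_subsystem_paths() {
      local ctrlr_name=$1
      # Ask the host app which ports the named controller is connected on.
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr_name" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  # After the listener move, only 4421 should remain for each mdns-discovered controller.
  [[ "$(get_subsystem_paths mdns0_nvme0)" == "4421" ]]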
00:23:19.161 13:07:59 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:19.161 13:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.161 13:07:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # xargs 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:19.161 13:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:19.161 13:07:59 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:19.161 13:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.161 13:07:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.161 13:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.420 13:07:59 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:19.420 13:07:59 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:19.420 13:07:59 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:19.420 13:07:59 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:19.420 13:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.420 13:07:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.420 13:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.420 13:07:59 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:19.420 [2024-12-13 13:08:00.054204] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:20.355 13:08:00 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:20.355 13:08:00 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:20.355 13:08:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.355 13:08:00 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:20.355 13:08:00 -- common/autotest_common.sh@10 -- # set +x 00:23:20.355 13:08:00 -- host/mdns_discovery.sh@80 -- # sort 00:23:20.355 13:08:00 -- host/mdns_discovery.sh@80 -- # xargs 00:23:20.355 13:08:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@68 -- # xargs 00:23:20.355 13:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@68 -- # sort 00:23:20.355 13:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:20.355 13:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:20.355 13:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.355 13:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@64 -- # xargs 00:23:20.355 13:08:01 -- host/mdns_discovery.sh@64 -- # sort 00:23:20.355 13:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:20.613 13:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.613 13:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:20.613 13:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:20.613 13:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.613 13:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:20.613 13:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.613 13:08:01 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:20.613 13:08:01 -- common/autotest_common.sh@650 -- # local es=0 00:23:20.613 13:08:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:20.613 13:08:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.613 13:08:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.613 13:08:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.613 13:08:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.613 13:08:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:20.613 13:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.613 13:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:20.613 [2024-12-13 13:08:01.222283] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:20.614 2024/12/13 13:08:01 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:20.614 request: 00:23:20.614 { 00:23:20.614 "method": "bdev_nvme_start_mdns_discovery", 00:23:20.614 "params": { 00:23:20.614 "name": "mdns", 00:23:20.614 "svcname": "_nvme-disc._http", 00:23:20.614 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:20.614 } 00:23:20.614 } 00:23:20.614 Got JSON-RPC error response 00:23:20.614 GoRPCClient: error on JSON-RPC call 00:23:20.614 13:08:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.614 13:08:01 -- 
common/autotest_common.sh@653 -- # es=1 00:23:20.614 13:08:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.614 13:08:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.614 13:08:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.614 13:08:01 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:20.872 [2024-12-13 13:08:01.610712] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:21.130 [2024-12-13 13:08:01.710708] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:21.130 [2024-12-13 13:08:01.810712] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:21.130 [2024-12-13 13:08:01.810729] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:21.130 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:21.130 cookie is 0 00:23:21.130 is_local: 1 00:23:21.130 our_own: 0 00:23:21.130 wide_area: 0 00:23:21.130 multicast: 1 00:23:21.130 cached: 1 00:23:21.389 [2024-12-13 13:08:01.910714] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:21.389 [2024-12-13 13:08:01.910735] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:21.389 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:21.389 cookie is 0 00:23:21.389 is_local: 1 00:23:21.389 our_own: 0 00:23:21.389 wide_area: 0 00:23:21.389 multicast: 1 00:23:21.389 cached: 1 00:23:22.326 [2024-12-13 13:08:02.820890] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:22.326 [2024-12-13 13:08:02.820914] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:22.326 [2024-12-13 13:08:02.820946] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:22.326 [2024-12-13 13:08:02.907006] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:22.326 [2024-12-13 13:08:02.920699] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:22.326 [2024-12-13 13:08:02.920720] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:22.326 [2024-12-13 13:08:02.920750] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:22.326 [2024-12-13 13:08:02.973673] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:22.326 [2024-12-13 13:08:02.973701] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:22.326 [2024-12-13 13:08:03.008816] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:22.326 [2024-12-13 13:08:03.074428] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:22.326 [2024-12-13 13:08:03.074463] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:25.613 13:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:25.613 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@80 -- # sort 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@80 -- # xargs 00:23:25.613 13:08:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:25.613 13:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.613 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@76 -- # sort 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@76 -- # xargs 00:23:25.613 13:08:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.613 13:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.613 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@64 -- # sort 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@64 -- # xargs 00:23:25.613 13:08:06 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:25.872 13:08:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:25.872 13:08:06 -- common/autotest_common.sh@650 -- # local es=0 00:23:25.872 13:08:06 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:25.872 13:08:06 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.872 13:08:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.872 13:08:06 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.872 13:08:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.872 13:08:06 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:25.872 13:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.872 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:23:25.872 [2024-12-13 13:08:06.417626] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:25.872 2024/12/13 13:08:06 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:25.872 request: 00:23:25.872 { 00:23:25.872 "method": "bdev_nvme_start_mdns_discovery", 00:23:25.872 "params": { 00:23:25.872 "name": "cdc", 00:23:25.872 "svcname": "_nvme-disc._tcp", 00:23:25.872 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:25.872 } 00:23:25.872 } 00:23:25.872 Got JSON-RPC error response 00:23:25.872 GoRPCClient: error on JSON-RPC call 00:23:25.872 13:08:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.872 13:08:06 -- common/autotest_common.sh@653 -- # es=1 00:23:25.872 13:08:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.872 13:08:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.872 13:08:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:25.872 13:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.872 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@76 -- # sort 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@76 -- # xargs 00:23:25.872 13:08:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.872 13:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.872 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@64 -- # sort 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@64 -- # xargs 00:23:25.872 13:08:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:25.872 13:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.872 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:23:25.872 13:08:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@197 -- # kill 97994 00:23:25.872 13:08:06 -- host/mdns_discovery.sh@200 -- # wait 97994 00:23:25.873 [2024-12-13 13:08:06.647333] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:26.132 13:08:06 -- host/mdns_discovery.sh@201 -- # kill 98075 00:23:26.132 Got SIGTERM, quitting. 00:23:26.132 13:08:06 -- host/mdns_discovery.sh@202 -- # kill 98024 00:23:26.132 13:08:06 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:26.132 13:08:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:26.132 13:08:06 -- nvmf/common.sh@116 -- # sync 00:23:26.132 Got SIGTERM, quitting. 
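The two rejected bdev_nvme_start_mdns_discovery calls above follow the framework's negative-test idiom: run the RPC through a NOT-style wrapper, expect a non-zero exit (here JSON-RPC error Code=-17, "File exists", because an mDNS discovery service named "mdns" is already running), and fail only if the call unexpectedly succeeds. A simplified sketch of that idiom; the real helper also tracks the exit status explicitly, as the es=1 bookkeeping in the trace shows:

  NOT() {
      # Succeed only when the wrapped command fails.
      if "$@"; then
          return 1        # unexpected success
      fi
      return 0            # expected failure, e.g. Code=-17 Msg=File exists
  }
  # Usage mirroring the trace: a second discovery service with the same bdev name must be rejected.
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test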
00:23:26.132 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:26.132 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:26.132 avahi-daemon 0.8 exiting. 00:23:26.132 13:08:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:26.132 13:08:06 -- nvmf/common.sh@119 -- # set +e 00:23:26.132 13:08:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:26.132 13:08:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:26.132 rmmod nvme_tcp 00:23:26.132 rmmod nvme_fabrics 00:23:26.132 rmmod nvme_keyring 00:23:26.132 13:08:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:26.132 13:08:06 -- nvmf/common.sh@123 -- # set -e 00:23:26.132 13:08:06 -- nvmf/common.sh@124 -- # return 0 00:23:26.132 13:08:06 -- nvmf/common.sh@477 -- # '[' -n 97952 ']' 00:23:26.132 13:08:06 -- nvmf/common.sh@478 -- # killprocess 97952 00:23:26.132 13:08:06 -- common/autotest_common.sh@936 -- # '[' -z 97952 ']' 00:23:26.132 13:08:06 -- common/autotest_common.sh@940 -- # kill -0 97952 00:23:26.132 13:08:06 -- common/autotest_common.sh@941 -- # uname 00:23:26.132 13:08:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:26.132 13:08:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97952 00:23:26.132 13:08:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:26.132 13:08:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:26.132 killing process with pid 97952 00:23:26.132 13:08:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97952' 00:23:26.132 13:08:06 -- common/autotest_common.sh@955 -- # kill 97952 00:23:26.132 13:08:06 -- common/autotest_common.sh@960 -- # wait 97952 00:23:26.391 13:08:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:26.391 13:08:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:26.391 13:08:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:26.391 13:08:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.391 13:08:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:26.391 13:08:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.391 13:08:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.391 13:08:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.391 13:08:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:26.391 00:23:26.391 real 0m19.941s 00:23:26.391 user 0m39.626s 00:23:26.391 sys 0m1.910s 00:23:26.391 13:08:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:26.391 13:08:07 -- common/autotest_common.sh@10 -- # set +x 00:23:26.391 ************************************ 00:23:26.391 END TEST nvmf_mdns_discovery 00:23:26.391 ************************************ 00:23:26.391 13:08:07 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:26.391 13:08:07 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:26.391 13:08:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:26.391 13:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:26.391 13:08:07 -- common/autotest_common.sh@10 -- # set +x 00:23:26.651 ************************************ 00:23:26.651 START TEST nvmf_multipath 00:23:26.651 ************************************ 00:23:26.651 13:08:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:26.651 * Looking for 
test storage... 00:23:26.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:26.651 13:08:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:26.651 13:08:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:26.651 13:08:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:26.651 13:08:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:26.651 13:08:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:26.651 13:08:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:26.651 13:08:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:26.651 13:08:07 -- scripts/common.sh@335 -- # IFS=.-: 00:23:26.651 13:08:07 -- scripts/common.sh@335 -- # read -ra ver1 00:23:26.651 13:08:07 -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.651 13:08:07 -- scripts/common.sh@336 -- # read -ra ver2 00:23:26.651 13:08:07 -- scripts/common.sh@337 -- # local 'op=<' 00:23:26.651 13:08:07 -- scripts/common.sh@339 -- # ver1_l=2 00:23:26.651 13:08:07 -- scripts/common.sh@340 -- # ver2_l=1 00:23:26.651 13:08:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:26.651 13:08:07 -- scripts/common.sh@343 -- # case "$op" in 00:23:26.651 13:08:07 -- scripts/common.sh@344 -- # : 1 00:23:26.651 13:08:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:26.651 13:08:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.651 13:08:07 -- scripts/common.sh@364 -- # decimal 1 00:23:26.651 13:08:07 -- scripts/common.sh@352 -- # local d=1 00:23:26.651 13:08:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.651 13:08:07 -- scripts/common.sh@354 -- # echo 1 00:23:26.651 13:08:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:26.651 13:08:07 -- scripts/common.sh@365 -- # decimal 2 00:23:26.651 13:08:07 -- scripts/common.sh@352 -- # local d=2 00:23:26.651 13:08:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.651 13:08:07 -- scripts/common.sh@354 -- # echo 2 00:23:26.651 13:08:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:26.651 13:08:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:26.651 13:08:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:26.651 13:08:07 -- scripts/common.sh@367 -- # return 0 00:23:26.651 13:08:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.651 13:08:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.651 --rc genhtml_branch_coverage=1 00:23:26.651 --rc genhtml_function_coverage=1 00:23:26.651 --rc genhtml_legend=1 00:23:26.651 --rc geninfo_all_blocks=1 00:23:26.651 --rc geninfo_unexecuted_blocks=1 00:23:26.651 00:23:26.651 ' 00:23:26.651 13:08:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.651 --rc genhtml_branch_coverage=1 00:23:26.651 --rc genhtml_function_coverage=1 00:23:26.651 --rc genhtml_legend=1 00:23:26.651 --rc geninfo_all_blocks=1 00:23:26.651 --rc geninfo_unexecuted_blocks=1 00:23:26.651 00:23:26.651 ' 00:23:26.651 13:08:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.651 --rc genhtml_branch_coverage=1 00:23:26.651 --rc genhtml_function_coverage=1 00:23:26.651 --rc genhtml_legend=1 00:23:26.651 --rc geninfo_all_blocks=1 00:23:26.651 --rc geninfo_unexecuted_blocks=1 00:23:26.651 00:23:26.651 ' 
00:23:26.651 13:08:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.651 --rc genhtml_branch_coverage=1 00:23:26.651 --rc genhtml_function_coverage=1 00:23:26.651 --rc genhtml_legend=1 00:23:26.651 --rc geninfo_all_blocks=1 00:23:26.651 --rc geninfo_unexecuted_blocks=1 00:23:26.651 00:23:26.651 ' 00:23:26.651 13:08:07 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:26.651 13:08:07 -- nvmf/common.sh@7 -- # uname -s 00:23:26.651 13:08:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.651 13:08:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.651 13:08:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.651 13:08:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.651 13:08:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.651 13:08:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.651 13:08:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.651 13:08:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.651 13:08:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.651 13:08:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.651 13:08:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:23:26.651 13:08:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:23:26.651 13:08:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.651 13:08:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.651 13:08:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:26.651 13:08:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:26.651 13:08:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.651 13:08:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.651 13:08:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.651 13:08:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.651 13:08:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.651 13:08:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.651 13:08:07 -- paths/export.sh@5 -- # export PATH 00:23:26.651 13:08:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.651 13:08:07 -- nvmf/common.sh@46 -- # : 0 00:23:26.651 13:08:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:26.651 13:08:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:26.651 13:08:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:26.651 13:08:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.651 13:08:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.651 13:08:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:26.651 13:08:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:26.651 13:08:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:26.651 13:08:07 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.651 13:08:07 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.651 13:08:07 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.651 13:08:07 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:26.651 13:08:07 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.651 13:08:07 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:26.651 13:08:07 -- host/multipath.sh@30 -- # nvmftestinit 00:23:26.651 13:08:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:26.651 13:08:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.651 13:08:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:26.651 13:08:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:26.651 13:08:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:26.651 13:08:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.651 13:08:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.651 13:08:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.651 13:08:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:26.651 13:08:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:26.651 13:08:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:26.651 13:08:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:26.651 13:08:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:26.651 13:08:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:26.651 13:08:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.651 13:08:07 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.651 13:08:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:26.651 13:08:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:26.651 13:08:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:26.651 13:08:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:26.651 13:08:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:26.651 13:08:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.652 13:08:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:26.652 13:08:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:26.652 13:08:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:26.652 13:08:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:26.652 13:08:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:26.652 13:08:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:26.652 Cannot find device "nvmf_tgt_br" 00:23:26.652 13:08:07 -- nvmf/common.sh@154 -- # true 00:23:26.652 13:08:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.911 Cannot find device "nvmf_tgt_br2" 00:23:26.911 13:08:07 -- nvmf/common.sh@155 -- # true 00:23:26.911 13:08:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:26.911 13:08:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:26.911 Cannot find device "nvmf_tgt_br" 00:23:26.911 13:08:07 -- nvmf/common.sh@157 -- # true 00:23:26.911 13:08:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:26.911 Cannot find device "nvmf_tgt_br2" 00:23:26.911 13:08:07 -- nvmf/common.sh@158 -- # true 00:23:26.911 13:08:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:26.911 13:08:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:26.911 13:08:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.911 13:08:07 -- nvmf/common.sh@161 -- # true 00:23:26.911 13:08:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.911 13:08:07 -- nvmf/common.sh@162 -- # true 00:23:26.911 13:08:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:26.911 13:08:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:26.911 13:08:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:26.911 13:08:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:26.911 13:08:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:26.911 13:08:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:26.911 13:08:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:26.911 13:08:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:26.911 13:08:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:26.911 13:08:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:26.911 13:08:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:26.911 13:08:07 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:26.911 13:08:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:26.911 13:08:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:26.911 13:08:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:26.911 13:08:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:26.911 13:08:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:26.911 13:08:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:26.911 13:08:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:26.911 13:08:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:26.911 13:08:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:26.911 13:08:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:27.172 13:08:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:27.172 13:08:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:27.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:23:27.172 00:23:27.172 --- 10.0.0.2 ping statistics --- 00:23:27.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.172 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:27.172 13:08:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:27.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:27.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:23:27.172 00:23:27.172 --- 10.0.0.3 ping statistics --- 00:23:27.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.172 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:27.172 13:08:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:27.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:27.172 00:23:27.172 --- 10.0.0.1 ping statistics --- 00:23:27.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.172 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:27.172 13:08:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.172 13:08:07 -- nvmf/common.sh@421 -- # return 0 00:23:27.172 13:08:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:27.172 13:08:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.172 13:08:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:27.172 13:08:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:27.172 13:08:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.172 13:08:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:27.172 13:08:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:27.172 13:08:07 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:27.172 13:08:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:27.172 13:08:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:27.172 13:08:07 -- common/autotest_common.sh@10 -- # set +x 00:23:27.172 13:08:07 -- nvmf/common.sh@469 -- # nvmfpid=98596 00:23:27.172 13:08:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:27.172 13:08:07 -- nvmf/common.sh@470 -- # waitforlisten 98596 00:23:27.172 13:08:07 -- common/autotest_common.sh@829 -- # '[' -z 98596 ']' 00:23:27.172 13:08:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.172 13:08:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.172 13:08:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.172 13:08:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.172 13:08:07 -- common/autotest_common.sh@10 -- # set +x 00:23:27.172 [2024-12-13 13:08:07.785030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:27.172 [2024-12-13 13:08:07.785120] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.172 [2024-12-13 13:08:07.926892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:27.438 [2024-12-13 13:08:08.000644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:27.438 [2024-12-13 13:08:08.000841] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.438 [2024-12-13 13:08:08.000858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.438 [2024-12-13 13:08:08.000870] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
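The nvmf_veth_init steps traced above (nvmf/common.sh lines ~140-206) build a small veth/bridge topology before the target is launched. A condensed sketch of what those commands amount to is below; interface names, addresses and the nvmf_tgt_ns_spdk namespace are taken directly from the trace, while ordering is simplified and the cleanup/error handling is omitted.

  # Sketch of the topology set up above (names and addresses as logged by nvmf_veth_init).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br      # bridge the host-side peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3     # reachability checks, as in the trace

The target application itself then runs inside the namespace (the trace shows nvmfappstart prefixing nvmf_tgt with "ip netns exec nvmf_tgt_ns_spdk", pid 98596), so the host-side initiator reaches it only through nvmf_br.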
00:23:27.438 [2024-12-13 13:08:08.001031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.438 [2024-12-13 13:08:08.001046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.374 13:08:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.374 13:08:08 -- common/autotest_common.sh@862 -- # return 0 00:23:28.374 13:08:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:28.374 13:08:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.374 13:08:08 -- common/autotest_common.sh@10 -- # set +x 00:23:28.374 13:08:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.374 13:08:08 -- host/multipath.sh@33 -- # nvmfapp_pid=98596 00:23:28.374 13:08:08 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:28.374 [2024-12-13 13:08:09.116136] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.374 13:08:09 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:28.941 Malloc0 00:23:28.941 13:08:09 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:29.200 13:08:09 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.200 13:08:09 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.458 [2024-12-13 13:08:10.178722] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.458 13:08:10 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:29.717 [2024-12-13 13:08:10.430866] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:29.717 13:08:10 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:29.717 13:08:10 -- host/multipath.sh@44 -- # bdevperf_pid=98694 00:23:29.717 13:08:10 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.717 13:08:10 -- host/multipath.sh@47 -- # waitforlisten 98694 /var/tmp/bdevperf.sock 00:23:29.717 13:08:10 -- common/autotest_common.sh@829 -- # '[' -z 98694 ']' 00:23:29.717 13:08:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.717 13:08:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.717 13:08:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
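Condensed, the target-side provisioning just logged is the following RPC sequence. Arguments are exactly as they appear in the trace; $rpc_py is the scripts/rpc.py path that multipath.sh@14 assigns, and the flag glosses in the comments are the editor's reading of the usual rpc.py options rather than anything stated in the log.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                  # 64 MB RAM-backed bdev, 512 B blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                          # -r: ANA reporting, exercised below
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Both listeners share one IP; the two TCP ports (4420/4421) are the two paths the test steers I/O between.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  # -z: the workload is not started yet; it is kicked off later via bdevperf.py perform_tests,
  # as the perform_tests call further down in the trace suggests.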
00:23:29.717 13:08:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.717 13:08:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.653 13:08:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.653 13:08:11 -- common/autotest_common.sh@862 -- # return 0 00:23:30.654 13:08:11 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:30.912 13:08:11 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:31.479 Nvme0n1 00:23:31.479 13:08:12 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:31.738 Nvme0n1 00:23:31.738 13:08:12 -- host/multipath.sh@78 -- # sleep 1 00:23:31.738 13:08:12 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.674 13:08:13 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:32.674 13:08:13 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:33.242 13:08:13 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:33.242 13:08:14 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:33.242 13:08:14 -- host/multipath.sh@65 -- # dtrace_pid=98787 00:23:33.242 13:08:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:33.242 13:08:14 -- host/multipath.sh@66 -- # sleep 6 00:23:39.810 13:08:20 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:39.810 13:08:20 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:39.810 13:08:20 -- host/multipath.sh@67 -- # active_port=4421 00:23:39.810 13:08:20 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:39.810 Attaching 4 probes... 
00:23:39.810 @path[10.0.0.2, 4421]: 21336 00:23:39.810 @path[10.0.0.2, 4421]: 21810 00:23:39.810 @path[10.0.0.2, 4421]: 21813 00:23:39.810 @path[10.0.0.2, 4421]: 21772 00:23:39.810 @path[10.0.0.2, 4421]: 21749 00:23:39.810 13:08:20 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:39.810 13:08:20 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:39.811 13:08:20 -- host/multipath.sh@69 -- # sed -n 1p 00:23:39.811 13:08:20 -- host/multipath.sh@69 -- # port=4421 00:23:39.811 13:08:20 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:39.811 13:08:20 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:39.811 13:08:20 -- host/multipath.sh@72 -- # kill 98787 00:23:39.811 13:08:20 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:39.811 13:08:20 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:39.811 13:08:20 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:39.811 13:08:20 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:40.070 13:08:20 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:40.070 13:08:20 -- host/multipath.sh@65 -- # dtrace_pid=98918 00:23:40.070 13:08:20 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.070 13:08:20 -- host/multipath.sh@66 -- # sleep 6 00:23:46.665 13:08:26 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:46.665 13:08:26 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:46.665 13:08:27 -- host/multipath.sh@67 -- # active_port=4420 00:23:46.665 13:08:27 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.665 Attaching 4 probes... 
00:23:46.665 @path[10.0.0.2, 4420]: 21168 00:23:46.665 @path[10.0.0.2, 4420]: 21705 00:23:46.665 @path[10.0.0.2, 4420]: 22014 00:23:46.665 @path[10.0.0.2, 4420]: 21883 00:23:46.665 @path[10.0.0.2, 4420]: 21909 00:23:46.665 13:08:27 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:46.665 13:08:27 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:46.665 13:08:27 -- host/multipath.sh@69 -- # sed -n 1p 00:23:46.665 13:08:27 -- host/multipath.sh@69 -- # port=4420 00:23:46.665 13:08:27 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:46.665 13:08:27 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:46.665 13:08:27 -- host/multipath.sh@72 -- # kill 98918 00:23:46.665 13:08:27 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.665 13:08:27 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:46.665 13:08:27 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:46.665 13:08:27 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.924 13:08:27 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:46.924 13:08:27 -- host/multipath.sh@65 -- # dtrace_pid=99053 00:23:46.924 13:08:27 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:46.924 13:08:27 -- host/multipath.sh@66 -- # sleep 6 00:23:53.488 13:08:33 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:53.488 13:08:33 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:53.488 13:08:33 -- host/multipath.sh@67 -- # active_port=4421 00:23:53.488 13:08:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.488 Attaching 4 probes... 
00:23:53.488 @path[10.0.0.2, 4421]: 15853 00:23:53.488 @path[10.0.0.2, 4421]: 21305 00:23:53.488 @path[10.0.0.2, 4421]: 21570 00:23:53.488 @path[10.0.0.2, 4421]: 21075 00:23:53.488 @path[10.0.0.2, 4421]: 21513 00:23:53.488 13:08:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:53.488 13:08:33 -- host/multipath.sh@69 -- # sed -n 1p 00:23:53.488 13:08:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:53.488 13:08:33 -- host/multipath.sh@69 -- # port=4421 00:23:53.488 13:08:33 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:53.488 13:08:33 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:53.488 13:08:33 -- host/multipath.sh@72 -- # kill 99053 00:23:53.488 13:08:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.488 13:08:33 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:53.488 13:08:33 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:53.488 13:08:34 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:53.747 13:08:34 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:53.747 13:08:34 -- host/multipath.sh@65 -- # dtrace_pid=99185 00:23:53.747 13:08:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:53.747 13:08:34 -- host/multipath.sh@66 -- # sleep 6 00:24:00.309 13:08:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:00.309 13:08:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:00.309 13:08:40 -- host/multipath.sh@67 -- # active_port= 00:24:00.309 13:08:40 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.309 Attaching 4 probes... 
00:24:00.309 00:24:00.309 00:24:00.309 00:24:00.309 00:24:00.309 00:24:00.309 13:08:40 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:00.309 13:08:40 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:00.309 13:08:40 -- host/multipath.sh@69 -- # sed -n 1p 00:24:00.309 13:08:40 -- host/multipath.sh@69 -- # port= 00:24:00.309 13:08:40 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:00.309 13:08:40 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:00.309 13:08:40 -- host/multipath.sh@72 -- # kill 99185 00:24:00.309 13:08:40 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.309 13:08:40 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:00.309 13:08:40 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:00.309 13:08:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:00.568 13:08:41 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:00.568 13:08:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:00.568 13:08:41 -- host/multipath.sh@65 -- # dtrace_pid=99315 00:24:00.568 13:08:41 -- host/multipath.sh@66 -- # sleep 6 00:24:07.134 13:08:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:07.134 13:08:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:07.134 13:08:47 -- host/multipath.sh@67 -- # active_port=4421 00:24:07.134 13:08:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.134 Attaching 4 probes... 
00:24:07.134 @path[10.0.0.2, 4421]: 20119 00:24:07.134 @path[10.0.0.2, 4421]: 19938 00:24:07.134 @path[10.0.0.2, 4421]: 20157 00:24:07.134 @path[10.0.0.2, 4421]: 21003 00:24:07.134 @path[10.0.0.2, 4421]: 20530 00:24:07.134 13:08:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:07.134 13:08:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:07.134 13:08:47 -- host/multipath.sh@69 -- # sed -n 1p 00:24:07.134 13:08:47 -- host/multipath.sh@69 -- # port=4421 00:24:07.134 13:08:47 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:07.134 13:08:47 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:07.134 13:08:47 -- host/multipath.sh@72 -- # kill 99315 00:24:07.134 13:08:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.134 13:08:47 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:07.134 [2024-12-13 13:08:47.696737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696877] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696895] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696911] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.696994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.134 [2024-12-13 13:08:47.697174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697182] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the 
state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697405] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 [2024-12-13 13:08:47.697592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3fe70 is same with the state(5) to be set 00:24:07.135 13:08:47 -- host/multipath.sh@101 -- # sleep 1 00:24:08.072 13:08:48 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:08.072 13:08:48 -- host/multipath.sh@65 -- # dtrace_pid=99451 00:24:08.072 13:08:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:08.072 13:08:48 -- host/multipath.sh@66 -- # sleep 6 00:24:14.638 13:08:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:14.638 13:08:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:14.638 13:08:55 -- host/multipath.sh@67 -- # active_port=4420 00:24:14.638 13:08:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:14.638 Attaching 4 probes... 
00:24:14.638 @path[10.0.0.2, 4420]: 19937 00:24:14.638 @path[10.0.0.2, 4420]: 21007 00:24:14.638 @path[10.0.0.2, 4420]: 20969 00:24:14.638 @path[10.0.0.2, 4420]: 20755 00:24:14.638 @path[10.0.0.2, 4420]: 20798 00:24:14.638 13:08:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:14.638 13:08:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:14.638 13:08:55 -- host/multipath.sh@69 -- # sed -n 1p 00:24:14.638 13:08:55 -- host/multipath.sh@69 -- # port=4420 00:24:14.638 13:08:55 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:14.638 13:08:55 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:14.638 13:08:55 -- host/multipath.sh@72 -- # kill 99451 00:24:14.638 13:08:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:14.638 13:08:55 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.638 [2024-12-13 13:08:55.268956] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.638 13:08:55 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.896 13:08:55 -- host/multipath.sh@111 -- # sleep 6 00:24:21.462 13:09:01 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:21.462 13:09:01 -- host/multipath.sh@65 -- # dtrace_pid=99638 00:24:21.462 13:09:01 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:21.462 13:09:01 -- host/multipath.sh@66 -- # sleep 6 00:24:28.082 13:09:07 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:28.082 13:09:07 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:28.082 Attaching 4 probes... 
00:24:28.082 @path[10.0.0.2, 4421]: 19894 00:24:28.082 @path[10.0.0.2, 4421]: 20183 00:24:28.082 @path[10.0.0.2, 4421]: 19442 00:24:28.082 @path[10.0.0.2, 4421]: 19995 00:24:28.082 @path[10.0.0.2, 4421]: 19905 00:24:28.082 13:09:07 -- host/multipath.sh@67 -- # active_port=4421 00:24:28.082 13:09:07 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:28.082 13:09:07 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:28.082 13:09:07 -- host/multipath.sh@69 -- # sed -n 1p 00:24:28.082 13:09:07 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:28.082 13:09:07 -- host/multipath.sh@69 -- # port=4421 00:24:28.082 13:09:07 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:28.082 13:09:07 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:28.082 13:09:07 -- host/multipath.sh@72 -- # kill 99638 00:24:28.082 13:09:07 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:28.082 13:09:07 -- host/multipath.sh@114 -- # killprocess 98694 00:24:28.082 13:09:07 -- common/autotest_common.sh@936 -- # '[' -z 98694 ']' 00:24:28.082 13:09:07 -- common/autotest_common.sh@940 -- # kill -0 98694 00:24:28.082 13:09:07 -- common/autotest_common.sh@941 -- # uname 00:24:28.082 13:09:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.082 13:09:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98694 00:24:28.082 13:09:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:28.082 killing process with pid 98694 00:24:28.082 13:09:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:28.082 13:09:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98694' 00:24:28.082 13:09:07 -- common/autotest_common.sh@955 -- # kill 98694 00:24:28.082 13:09:07 -- common/autotest_common.sh@960 -- # wait 98694 00:24:28.082 Connection closed with partial response: 00:24:28.082 00:24:28.082 00:24:28.082 13:09:08 -- host/multipath.sh@116 -- # wait 98694 00:24:28.082 13:09:08 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:28.082 [2024-12-13 13:08:10.492108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:28.082 [2024-12-13 13:08:10.492222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98694 ] 00:24:28.082 [2024-12-13 13:08:10.621269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.082 [2024-12-13 13:08:10.692022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.082 Running I/O for 90 seconds... 
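Each confirm_io_on_port cycle in the trace above follows the same shape, so a single condensed sketch is given here. The two bdev_nvme_attach_controller calls further up (the second with -x multipath) give bdevperf one Nvme0n1 bdev with two paths to cnode1, on ports 4420 and 4421 of 10.0.0.2. The individual commands below are copied from the trace, but the helper's internal redirections are not visible in it, so the trace.txt plumbing and the exact pipeline order are assumptions; the jq filter matches whichever ANA state a given cycle expects (the optimized case is shown).

  # ANA steering: mark one listener as the preferred path and the other as not.
  $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

  # Count per-path I/O with the nvmf_path.bt bpftrace script attached to the target (pid 98596);
  # collecting its output in trace.txt is assumed from the later `cat` of that file.
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98596 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
  dtrace_pid=$!
  sleep 6

  # Which port does the target report in the expected ANA state?
  active_port=$($rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

  # Which port actually carried the I/O? (first @path[10.0.0.2, <port>] sample in trace.txt)
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

  [[ $port == "$active_port" ]]        # the assertion: I/O followed the advertised path
  kill $dtrace_pid && rm -f trace.txt

When both listeners are set inaccessible (the confirm_io_on_port '' '' cycle above), no @path samples appear and both sides of the comparison are empty, which still passes; removing the 4421 listener entirely (multipath.sh@100) forces the controller back onto 4420, which the following cycle verifies before 4421 is re-added.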
00:24:28.082 [2024-12-13 13:08:20.788011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.082 [2024-12-13 13:08:20.788079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.082 [2024-12-13 13:08:20.788450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.082 [2024-12-13 13:08:20.788499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.082 [2024-12-13 13:08:20.788533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.082 [2024-12-13 13:08:20.788563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.082 [2024-12-13 13:08:20.788594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.082 [2024-12-13 13:08:20.788656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.082 [2024-12-13 13:08:20.788674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.082 [2024-12-13 13:08:20.788687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.788705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.788718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.788736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.788765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.789410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.789449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:28.083 [2024-12-13 13:08:20.789744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.789905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.789941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.789963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.789977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.790357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.790390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.790422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.790454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.790494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.790562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.083 [2024-12-13 13:08:20.790728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.083 [2024-12-13 13:08:20.790761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.083 [2024-12-13 13:08:20.790780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.790831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.790855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.790870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.790891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.790905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:24:28.084 [2024-12-13 13:08:20.790925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.790946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.790967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.790982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.791201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.791734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.791972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.791991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.084 [2024-12-13 13:08:20.792451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.084 [2024-12-13 13:08:20.792518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.084 [2024-12-13 13:08:20.792733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.084 [2024-12-13 13:08:20.792774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.792795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.792823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.792845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.792859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.792878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.792892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.792912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.792926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.792946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.792959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.792984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.792999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.793067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.793101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.793320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 
dnr:0 00:24:28.085 [2024-12-13 13:08:20.793568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.793582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.793648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.793722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.793984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.085 [2024-12-13 13:08:20.793997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.794017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.794030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.794049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.794063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.794093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.794127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.085 [2024-12-13 13:08:20.794141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.085 [2024-12-13 13:08:20.794160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.794173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.794193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.794206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.794225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.794239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.794258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.794271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.794296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.794310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.794950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.794976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.795016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.795049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.795182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:28.086 [2024-12-13 13:08:20.795306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.795343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.795820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.086 [2024-12-13 13:08:20.795899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.795963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.795982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.796000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.796018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.796031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.796050] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.086 [2024-12-13 13:08:20.796063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.086 [2024-12-13 13:08:20.796082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 
m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.796815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.796941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.796955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.797447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.797495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.087 [2024-12-13 13:08:20.797867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.087 [2024-12-13 13:08:20.797898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:28.087 [2024-12-13 13:08:20.797937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.087 [2024-12-13 13:08:20.797957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.797970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.797989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.798814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:24:28.088 [2024-12-13 13:08:20.798928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.798972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.798992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.799005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.799040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.799072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.799112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.799182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.799231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.799266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.088 [2024-12-13 13:08:20.799300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.088 [2024-12-13 13:08:20.799335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.088 [2024-12-13 13:08:20.799356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.799370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.799390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.799404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.799439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.799467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.799486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.799500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.799524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.799539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.799558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.799572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.799591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.808489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.808575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.808627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.808671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.808706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.808741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.808812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.808848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.808884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.808920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.808956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.808977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.808991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.809012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:28.089 [2024-12-13 13:08:20.809027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.809048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.809063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.809857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.809898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.809927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.809944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.809965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.809980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.810015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.810085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.810216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.089 [2024-12-13 13:08:20.810249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.089 [2024-12-13 13:08:20.810575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.089 [2024-12-13 13:08:20.810589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.810721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:28.090 [2024-12-13 13:08:20.810921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.810970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.810991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.811718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.811859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.811874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.812404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.812428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.812452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.812468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.812487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.090 [2024-12-13 13:08:20.812500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.090 [2024-12-13 13:08:20.812518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.090 [2024-12-13 13:08:20.812531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.091 [2024-12-13 13:08:20.812562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.812907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.812942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 
nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.812977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.812997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.813012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.813047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.813082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.813148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.813191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.813306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.091 [2024-12-13 13:08:20.813637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.091 [2024-12-13 13:08:20.813662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.091 [2024-12-13 13:08:20.813675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:24:28.091 [2024-12-13 13:08:20.813694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:28.091 [2024-12-13 13:08:20.813707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:28.092 [... repeated nvme_qpair.c NOTICE output: between 13:08:20.813 and 13:08:20.823 each further READ/WRITE command on qid:1 (nsid:1, lba range ~40576-41872, len:8) was likewise completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:24:28.097 [2024-12-13 13:08:20.823693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:24:28.097 [2024-12-13 13:08:20.823706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.823734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.097 [2024-12-13 13:08:20.823764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.823800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.097 [2024-12-13 13:08:20.823831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.823852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.097 [2024-12-13 13:08:20.823867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.823891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.097 [2024-12-13 13:08:20.823906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.823928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.097 [2024-12-13 13:08:20.823942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.823963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.097 [2024-12-13 13:08:20.823978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.824000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.097 [2024-12-13 13:08:20.824014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.824035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.097 [2024-12-13 13:08:20.824049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.824070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.097 [2024-12-13 13:08:20.824085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.097 [2024-12-13 13:08:20.824106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.097 [2024-12-13 13:08:20.824171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.824204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.824235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.824275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.824770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.824845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.824882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.824918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.824953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.824974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.824989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:24:28.098 [2024-12-13 13:08:20.825411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.098 [2024-12-13 13:08:20.825691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.098 [2024-12-13 13:08:20.825918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.098 [2024-12-13 13:08:20.825932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.825953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.825967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.825988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.099 [2024-12-13 13:08:20.826462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.826961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.826982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.826996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.827046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.099 [2024-12-13 13:08:20.827350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.099 [2024-12-13 13:08:20.827372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-12-13 13:08:20.827402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.827421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.827450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.827470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.827483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.827509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.827536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.827556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.827593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 
dnr:0 00:24:28.100 [2024-12-13 13:08:20.828389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.828403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.828499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.828531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.828562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.828625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.828731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.828796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.828973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.828994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.829294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-12-13 13:08:20.829356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.100 [2024-12-13 13:08:20.829482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-12-13 13:08:20.829607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.100 [2024-12-13 13:08:20.829626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.829644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.829677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.829708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.829739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.829804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.829851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.829886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.829920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.829954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.829974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.829988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.830023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.830056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.830112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.830160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.830192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.830223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.830254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.830286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.830790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.830847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.830883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.830919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.830954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.830975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.830990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
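The "(03/02)" printed in each of the completions above is the NVMe status pair SCT=0x3 / SC=0x02, i.e. the path-related status "Asymmetric Access Inaccessible", which is the expected per-I/O failure while the ANA state of this path is INACCESSIBLE during the failover test. The following is a minimal, standalone C sketch of that decoding; it is illustrative only, does not use any SPDK API, takes the status-code names from the NVMe base specification, and the helper `path_status_name` is a hypothetical name introduced here.

```c
/* Illustrative sketch: decode the "(SCT/SC)" pair shown in the log,
 * e.g. "(03/02)", into an NVMe path-related status name.
 * Not SPDK code; names follow the NVMe base specification. */
#include <stdio.h>

/* Hypothetical helper: SCT 0x3 (path-related) status codes. */
static const char *path_status_name(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "INTERNAL PATH ERROR";
    case 0x01: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
    case 0x02: return "ASYMMETRIC ACCESS INACCESSIBLE";
    case 0x03: return "ASYMMETRIC ACCESS TRANSITION";
    default:   return "UNKNOWN PATH-RELATED STATUS";
    }
}

int main(void)
{
    unsigned int sct = 0x3, sc = 0x02; /* the "(03/02)" seen in the completions */

    if (sct == 0x3)
        printf("SCT 0x%x / SC 0x%02x -> %s\n", sct, sc, path_status_name(sc));
    return 0;
}
```

Run against the values in this log it prints "SCT 0x3 / SC 0x02 -> ASYMMETRIC ACCESS INACCESSIBLE", matching the text spdk_nvme_print_completion emits for every queued READ/WRITE on qid:1 below.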
00:24:28.101 [2024-12-13 13:08:20.831056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.831367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-12-13 13:08:20.831417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.831480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.831511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.831552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.831598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.101 [2024-12-13 13:08:20.831616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.101 [2024-12-13 13:08:20.831629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.831661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.831692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.831723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.831770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.831820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.831871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.831905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.831939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.831973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.831993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.832022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.832059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.832093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.832156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.832203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:28.102 [2024-12-13 13:08:20.832235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.832265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.832297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.832315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.838589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.838657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.838692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.838724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.838820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.838860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.838895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.838945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.838966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.838980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.839048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.839372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.839471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-12-13 13:08:20.839519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.102 [2024-12-13 13:08:20.839538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.102 [2024-12-13 13:08:20.839551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:24:28.103 [2024-12-13 13:08:20.839711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.839899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.839935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.839970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.839990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.840004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.840025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.840039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.840736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.840779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.840805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.840836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.840859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.840872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.840892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.840905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.840924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.840937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.840956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.840969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.841017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.841050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.841128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.841222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.103 [2024-12-13 13:08:20.841253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-12-13 13:08:20.841480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.103 [2024-12-13 13:08:20.841512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.103 [2024-12-13 13:08:20.841530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.841637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.841700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.841984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.841998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.842202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.842272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.842304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.842335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.842366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.842397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.842491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:24:28.104 [2024-12-13 13:08:20.842509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.842541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.842554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.843159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.843202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.843251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.843287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.843323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.843358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.104 [2024-12-13 13:08:20.843393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.843428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.104 [2024-12-13 13:08:20.843449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.104 [2024-12-13 13:08:20.843463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.843840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.843878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.843928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.843962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.843997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.105 [2024-12-13 13:08:20.844304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 
nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.105 [2024-12-13 13:08:20.844938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.844971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.844990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.845004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.845023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.845036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.105 [2024-12-13 13:08:20.845055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.105 [2024-12-13 13:08:20.845084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.845180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:24:28.106 [2024-12-13 13:08:20.845428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.845476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.845545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.845627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.845974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.845993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.846007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.846042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.846056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.846077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.846091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.846823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.846851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.846895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.846938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.846990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.847004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.847055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.847090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.847147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.847183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.847218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.847252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.847287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.847322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.106 [2024-12-13 13:08:20.847357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.106 [2024-12-13 13:08:20.847392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.106 [2024-12-13 13:08:20.847434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.106 [2024-12-13 13:08:20.847457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.847472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.847938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.847967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.847982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:24:28.107 [2024-12-13 13:08:20.848595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-12-13 13:08:20.848819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.107 [2024-12-13 13:08:20.848840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.107 [2024-12-13 13:08:20.848854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.848885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.848909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.848933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.848948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.848969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.848983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.849553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.849595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.849786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.849826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.849981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.849996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.108 [2024-12-13 13:08:20.850265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.108 [2024-12-13 13:08:20.850772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.850976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.850990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.851031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.851062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.851116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.108 [2024-12-13 13:08:20.851148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.108 [2024-12-13 13:08:20.851178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.851268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.851303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.851339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.851417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.851488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.851524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:24:28.109 [2024-12-13 13:08:20.851626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.851763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.851965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.851979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.852060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.852131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.852201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-12-13 13:08:20.852498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.852534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.852555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.852570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.853476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.109 [2024-12-13 13:08:20.853506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.109 [2024-12-13 13:08:20.853546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.853567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.853604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.110 [2024-12-13 13:08:20.853640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.853676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.853712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.853762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.853810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.853845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.853896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.853932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.853968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.853990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.854076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.854112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.854545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.854616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:24:28.110 [2024-12-13 13:08:20.854755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.110 [2024-12-13 13:08:20.854923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.854958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.854979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.854994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.855014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.110 [2024-12-13 13:08:20.855029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.110 [2024-12-13 13:08:20.855049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:20.855064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:20.855099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:20.855149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:20.855184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:20.855227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:20.855265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:20.855300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:20.855335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:20.855371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:20.855406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:20.855441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.855462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:20.855476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:20.856084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:20.856112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.362988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.363053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.363211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.363300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.111 [2024-12-13 13:08:27.363479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.363542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.363648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4840 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.363966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.363981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.364001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.364016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.364037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.364052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.364088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.111 [2024-12-13 13:08:27.364102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.364139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.364167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.111 [2024-12-13 13:08:27.364186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.111 [2024-12-13 13:08:27.364199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.365770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:24:28.112 [2024-12-13 13:08:27.365899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.365967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.365991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.366013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.366051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.366090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.366180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.366215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.366251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.366286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.366320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.366356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.366405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.112 [2024-12-13 13:08:27.366439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.112 [2024-12-13 13:08:27.366473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.112 [2024-12-13 13:08:27.366494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.366651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.366686] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.366720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.366889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.366967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.366982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 
13:08:27.367306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.367468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.367550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.367606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.367724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.367796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.367916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.367956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.367973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.368016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.368060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.368103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.368160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.368225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.368267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.368321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.368360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.368399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.113 [2024-12-13 13:08:27.368438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.113 [2024-12-13 13:08:27.368464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.113 [2024-12-13 13:08:27.368478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:27.368525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:27.368565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:27.368604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:27.368643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:27.368683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:27.368722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:27.368793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:27.368820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:27.368865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.424633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.425293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.425432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.425520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.425603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.425698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.425824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.425930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.426028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.426279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.426391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:24:28.114 [2024-12-13 13:08:34.426723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.426805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.426869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.426927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.426972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.426988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.114 [2024-12-13 13:08:34.427639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:28.114 [2024-12-13 13:08:34.427853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.114 [2024-12-13 13:08:34.427871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.427904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.427919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.427941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.427956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.427982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.427999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:28.115 [2024-12-13 13:08:34.428076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.428113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.428302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.428368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.428402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:126 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.428468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.428663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.428805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.428972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.428994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.429009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.429047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.429085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.429123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.429205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.429245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 
sqhd:002d p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.429280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.429311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.429347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.115 [2024-12-13 13:08:34.429379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.115 [2024-12-13 13:08:34.429411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.115 [2024-12-13 13:08:34.429431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.429540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.429604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.429641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.429675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.429806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.429926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.429963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.429985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.429999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:28.116 [2024-12-13 13:08:34.430339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.430878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.116 [2024-12-13 13:08:34.430960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.116 [2024-12-13 13:08:34.430986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.116 [2024-12-13 13:08:34.431000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:24:28.117 [2024-12-13 13:08:34.431900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.431916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.431942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:34.431963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:34.432001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.117 [2024-12-13 13:08:34.432016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.697884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.697935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.697962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.697978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.697993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 
[2024-12-13 13:08:47.698165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.117 [2024-12-13 13:08:47.698512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.117 [2024-12-13 13:08:47.698524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.698639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.698715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.698975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.698989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55736 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:28.118 [2024-12-13 13:08:47.699482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.118 [2024-12-13 13:08:47.699687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699779] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.118 [2024-12-13 13:08:47.699832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.118 [2024-12-13 13:08:47.699845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.699859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.699873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.699887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.699905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.699920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.699933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.699947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.699959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.699973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.699985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.699999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700079] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.700527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.700576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:28.119 [2024-12-13 13:08:47.700624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.700842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.700894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700908] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.700920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.119 [2024-12-13 13:08:47.700952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.119 [2024-12-13 13:08:47.700966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.119 [2024-12-13 13:08:47.700979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.700992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.120 [2024-12-13 13:08:47.701004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.120 [2024-12-13 13:08:47.701056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.120 [2024-12-13 13:08:47.701113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701206] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.120 [2024-12-13 13:08:47.701321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.120 [2024-12-13 13:08:47.701345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.120 [2024-12-13 13:08:47.701442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.120 [2024-12-13 13:08:47.701679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ef120 is same with the state(5) to be set 00:24:28.120 [2024-12-13 13:08:47.701706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.120 [2024-12-13 13:08:47.701716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.120 [2024-12-13 13:08:47.701725] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56248 len:8 PRP1 0x0 PRP2 0x0 00:24:28.120 [2024-12-13 13:08:47.701737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.120 [2024-12-13 13:08:47.701825] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10ef120 was disconnected and freed. reset controller. 00:24:28.120 [2024-12-13 13:08:47.702998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.120 [2024-12-13 13:08:47.703149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb60 (9): Bad file descriptor 00:24:28.120 [2024-12-13 13:08:47.703290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.120 [2024-12-13 13:08:47.703344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.120 [2024-12-13 13:08:47.703366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb60 with addr=10.0.0.2, port=4421 00:24:28.120 [2024-12-13 13:08:47.703380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb60 is same with the state(5) to be set 00:24:28.120 [2024-12-13 13:08:47.703403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb60 (9): Bad file descriptor 00:24:28.120 [2024-12-13 13:08:47.703425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.120 [2024-12-13 13:08:47.703439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.120 [2024-12-13 13:08:47.703468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.120 [2024-12-13 13:08:47.703490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.120 [2024-12-13 13:08:47.703504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.120 [2024-12-13 13:08:57.769499] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
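Note on the sequence above: this is the multipath failover path in this run. Once the active listener is torn down, every in-flight command on I/O qpair 1 is completed manually with ABORTED - SQ DELETION, bdev_nvme frees the disconnected qpair and resets the controller, the reconnect to 10.0.0.2 port 4421 fails with errno 111 (connection refused) while no path is up, and roughly ten seconds later the reset finally succeeds. Below is a rough, hedged sketch of the kind of listener toggle that produces this behavior; the rpc.py path, subsystem NQN, address and port are taken from elsewhere in this log, while the exact sequence inside host/multipath.sh is assumed, not shown here.

# hedged sketch, not the literal host/multipath.sh logic
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# drop the active path: queued I/O on its SQ completes with ABORTED - SQ DELETION
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
sleep 10   # reconnect attempts fail with errno 111 while no listener is up
# restore the path: bdev_nvme's reset/reconnect then completes successfully
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421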
00:24:28.120 Received shutdown signal, test time was about 55.350455 seconds 00:24:28.120 00:24:28.120 Latency(us) 00:24:28.120 [2024-12-13T13:09:08.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.120 [2024-12-13T13:09:08.896Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:28.121 Verification LBA range: start 0x0 length 0x4000 00:24:28.121 Nvme0n1 : 55.35 11867.20 46.36 0.00 0.00 10771.66 327.68 7046430.72 00:24:28.121 [2024-12-13T13:09:08.897Z] =================================================================================================================== 00:24:28.121 [2024-12-13T13:09:08.897Z] Total : 11867.20 46.36 0.00 0.00 10771.66 327.68 7046430.72 00:24:28.121 13:09:08 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.121 13:09:08 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:28.121 13:09:08 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:28.121 13:09:08 -- host/multipath.sh@125 -- # nvmftestfini 00:24:28.121 13:09:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:28.121 13:09:08 -- nvmf/common.sh@116 -- # sync 00:24:28.121 13:09:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:28.121 13:09:08 -- nvmf/common.sh@119 -- # set +e 00:24:28.121 13:09:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:28.121 13:09:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:28.121 rmmod nvme_tcp 00:24:28.121 rmmod nvme_fabrics 00:24:28.121 rmmod nvme_keyring 00:24:28.121 13:09:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:28.121 13:09:08 -- nvmf/common.sh@123 -- # set -e 00:24:28.121 13:09:08 -- nvmf/common.sh@124 -- # return 0 00:24:28.121 13:09:08 -- nvmf/common.sh@477 -- # '[' -n 98596 ']' 00:24:28.121 13:09:08 -- nvmf/common.sh@478 -- # killprocess 98596 00:24:28.121 13:09:08 -- common/autotest_common.sh@936 -- # '[' -z 98596 ']' 00:24:28.121 13:09:08 -- common/autotest_common.sh@940 -- # kill -0 98596 00:24:28.121 13:09:08 -- common/autotest_common.sh@941 -- # uname 00:24:28.121 13:09:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.121 13:09:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98596 00:24:28.121 13:09:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:28.121 13:09:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:28.121 killing process with pid 98596 00:24:28.121 13:09:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98596' 00:24:28.121 13:09:08 -- common/autotest_common.sh@955 -- # kill 98596 00:24:28.121 13:09:08 -- common/autotest_common.sh@960 -- # wait 98596 00:24:28.121 13:09:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:28.121 13:09:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:28.121 13:09:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:28.121 13:09:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.121 13:09:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:28.121 13:09:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.121 13:09:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.121 13:09:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.121 13:09:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:28.121 ************************************ 00:24:28.121 END TEST 
nvmf_multipath 00:24:28.121 ************************************ 00:24:28.121 00:24:28.121 real 1m1.510s 00:24:28.121 user 2m53.195s 00:24:28.121 sys 0m14.044s 00:24:28.121 13:09:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:28.121 13:09:08 -- common/autotest_common.sh@10 -- # set +x 00:24:28.121 13:09:08 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:28.121 13:09:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:28.121 13:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:28.121 13:09:08 -- common/autotest_common.sh@10 -- # set +x 00:24:28.121 ************************************ 00:24:28.121 START TEST nvmf_timeout 00:24:28.121 ************************************ 00:24:28.121 13:09:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:28.121 * Looking for test storage... 00:24:28.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:28.121 13:09:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:28.121 13:09:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:28.121 13:09:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:28.380 13:09:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:28.380 13:09:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:28.380 13:09:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:28.380 13:09:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:28.380 13:09:08 -- scripts/common.sh@335 -- # IFS=.-: 00:24:28.380 13:09:08 -- scripts/common.sh@335 -- # read -ra ver1 00:24:28.380 13:09:08 -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.380 13:09:08 -- scripts/common.sh@336 -- # read -ra ver2 00:24:28.380 13:09:08 -- scripts/common.sh@337 -- # local 'op=<' 00:24:28.380 13:09:08 -- scripts/common.sh@339 -- # ver1_l=2 00:24:28.380 13:09:08 -- scripts/common.sh@340 -- # ver2_l=1 00:24:28.380 13:09:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:28.380 13:09:08 -- scripts/common.sh@343 -- # case "$op" in 00:24:28.380 13:09:08 -- scripts/common.sh@344 -- # : 1 00:24:28.380 13:09:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:28.380 13:09:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.380 13:09:08 -- scripts/common.sh@364 -- # decimal 1 00:24:28.380 13:09:08 -- scripts/common.sh@352 -- # local d=1 00:24:28.380 13:09:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.380 13:09:08 -- scripts/common.sh@354 -- # echo 1 00:24:28.380 13:09:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:28.380 13:09:08 -- scripts/common.sh@365 -- # decimal 2 00:24:28.380 13:09:08 -- scripts/common.sh@352 -- # local d=2 00:24:28.380 13:09:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.380 13:09:08 -- scripts/common.sh@354 -- # echo 2 00:24:28.380 13:09:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:28.380 13:09:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:28.380 13:09:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:28.380 13:09:08 -- scripts/common.sh@367 -- # return 0 00:24:28.380 13:09:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.380 13:09:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.380 --rc genhtml_branch_coverage=1 00:24:28.380 --rc genhtml_function_coverage=1 00:24:28.380 --rc genhtml_legend=1 00:24:28.380 --rc geninfo_all_blocks=1 00:24:28.380 --rc geninfo_unexecuted_blocks=1 00:24:28.380 00:24:28.380 ' 00:24:28.380 13:09:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.380 --rc genhtml_branch_coverage=1 00:24:28.380 --rc genhtml_function_coverage=1 00:24:28.380 --rc genhtml_legend=1 00:24:28.380 --rc geninfo_all_blocks=1 00:24:28.380 --rc geninfo_unexecuted_blocks=1 00:24:28.380 00:24:28.380 ' 00:24:28.380 13:09:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.380 --rc genhtml_branch_coverage=1 00:24:28.380 --rc genhtml_function_coverage=1 00:24:28.380 --rc genhtml_legend=1 00:24:28.380 --rc geninfo_all_blocks=1 00:24:28.380 --rc geninfo_unexecuted_blocks=1 00:24:28.380 00:24:28.380 ' 00:24:28.380 13:09:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.380 --rc genhtml_branch_coverage=1 00:24:28.380 --rc genhtml_function_coverage=1 00:24:28.380 --rc genhtml_legend=1 00:24:28.380 --rc geninfo_all_blocks=1 00:24:28.380 --rc geninfo_unexecuted_blocks=1 00:24:28.380 00:24:28.380 ' 00:24:28.380 13:09:08 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.380 13:09:08 -- nvmf/common.sh@7 -- # uname -s 00:24:28.380 13:09:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.380 13:09:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.380 13:09:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.380 13:09:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.380 13:09:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.380 13:09:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.380 13:09:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.380 13:09:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.380 13:09:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.380 13:09:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.380 13:09:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:24:28.380 
13:09:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:24:28.380 13:09:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.380 13:09:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.380 13:09:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.380 13:09:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.381 13:09:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.381 13:09:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.381 13:09:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.381 13:09:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.381 13:09:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.381 13:09:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.381 13:09:08 -- paths/export.sh@5 -- # export PATH 00:24:28.381 13:09:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.381 13:09:08 -- nvmf/common.sh@46 -- # : 0 00:24:28.381 13:09:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:28.381 13:09:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:28.381 13:09:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:28.381 13:09:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.381 13:09:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.381 13:09:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:28.381 13:09:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:28.381 13:09:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:28.381 13:09:08 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.381 13:09:08 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.381 13:09:08 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.381 13:09:08 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:28.381 13:09:08 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.381 13:09:08 -- host/timeout.sh@19 -- # nvmftestinit 00:24:28.381 13:09:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:28.381 13:09:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.381 13:09:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:28.381 13:09:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:28.381 13:09:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:28.381 13:09:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.381 13:09:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.381 13:09:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.381 13:09:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:28.381 13:09:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:28.381 13:09:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:28.381 13:09:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:28.381 13:09:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:28.381 13:09:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:28.381 13:09:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.381 13:09:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.381 13:09:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:28.381 13:09:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:28.381 13:09:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.381 13:09:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.381 13:09:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.381 13:09:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.381 13:09:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.381 13:09:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.381 13:09:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.381 13:09:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.381 13:09:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:28.381 13:09:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:28.381 Cannot find device "nvmf_tgt_br" 00:24:28.381 13:09:08 -- nvmf/common.sh@154 -- # true 00:24:28.381 13:09:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.381 Cannot find device "nvmf_tgt_br2" 00:24:28.381 13:09:08 -- nvmf/common.sh@155 -- # true 00:24:28.381 13:09:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:28.381 13:09:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:28.381 Cannot find device "nvmf_tgt_br" 00:24:28.381 13:09:08 -- nvmf/common.sh@157 -- # true 00:24:28.381 13:09:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:28.381 Cannot find device "nvmf_tgt_br2" 00:24:28.381 13:09:09 -- nvmf/common.sh@158 -- # true 00:24:28.381 13:09:09 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:28.381 13:09:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:28.381 13:09:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.381 13:09:09 -- nvmf/common.sh@161 -- # true 00:24:28.381 13:09:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.381 13:09:09 -- nvmf/common.sh@162 -- # true 00:24:28.381 13:09:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.381 13:09:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.381 13:09:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.381 13:09:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.381 13:09:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.381 13:09:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.381 13:09:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.381 13:09:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:28.381 13:09:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:28.381 13:09:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:28.381 13:09:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:28.381 13:09:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:28.381 13:09:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:28.381 13:09:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.640 13:09:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.640 13:09:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.640 13:09:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:28.640 13:09:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:28.640 13:09:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.640 13:09:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.640 13:09:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.640 13:09:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.640 13:09:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.640 13:09:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:28.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:24:28.640 00:24:28.640 --- 10.0.0.2 ping statistics --- 00:24:28.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.640 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:24:28.640 13:09:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:28.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:28.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:24:28.640 00:24:28.640 --- 10.0.0.3 ping statistics --- 00:24:28.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.640 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:28.640 13:09:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:28.640 00:24:28.640 --- 10.0.0.1 ping statistics --- 00:24:28.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.640 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:28.640 13:09:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.640 13:09:09 -- nvmf/common.sh@421 -- # return 0 00:24:28.640 13:09:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:28.640 13:09:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.640 13:09:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:28.640 13:09:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:28.640 13:09:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.640 13:09:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:28.640 13:09:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:28.640 13:09:09 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:28.640 13:09:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:28.640 13:09:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.640 13:09:09 -- common/autotest_common.sh@10 -- # set +x 00:24:28.640 13:09:09 -- nvmf/common.sh@469 -- # nvmfpid=99971 00:24:28.640 13:09:09 -- nvmf/common.sh@470 -- # waitforlisten 99971 00:24:28.640 13:09:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:28.640 13:09:09 -- common/autotest_common.sh@829 -- # '[' -z 99971 ']' 00:24:28.640 13:09:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.640 13:09:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.640 13:09:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.640 13:09:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.640 13:09:09 -- common/autotest_common.sh@10 -- # set +x 00:24:28.640 [2024-12-13 13:09:09.310336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:28.640 [2024-12-13 13:09:09.310444] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.899 [2024-12-13 13:09:09.435345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:28.899 [2024-12-13 13:09:09.494165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:28.899 [2024-12-13 13:09:09.494331] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.899 [2024-12-13 13:09:09.494360] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
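Note on the setup traced above: connectivity for the timeout test comes from the veth/bridge topology that nvmf_veth_init just built. The initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target gets nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, all three veth peers are enslaved to the nvmf_br bridge, and the three pings verify each address before the target is provisioned. The same setup, condensed from the trace with the xtrace prefixes stripped and error handling omitted:

# condensed from the nvmf_veth_init trace above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator to target, as in the trace above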
00:24:28.899 [2024-12-13 13:09:09.494384] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.899 [2024-12-13 13:09:09.494523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.899 [2024-12-13 13:09:09.494535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.835 13:09:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.835 13:09:10 -- common/autotest_common.sh@862 -- # return 0 00:24:29.835 13:09:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:29.835 13:09:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.835 13:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:29.835 13:09:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.835 13:09:10 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.835 13:09:10 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:30.094 [2024-12-13 13:09:10.675113] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.094 13:09:10 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:30.353 Malloc0 00:24:30.353 13:09:11 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.611 13:09:11 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:30.870 13:09:11 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.129 [2024-12-13 13:09:11.702636] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.129 13:09:11 -- host/timeout.sh@32 -- # bdevperf_pid=100063 00:24:31.129 13:09:11 -- host/timeout.sh@34 -- # waitforlisten 100063 /var/tmp/bdevperf.sock 00:24:31.129 13:09:11 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:31.129 13:09:11 -- common/autotest_common.sh@829 -- # '[' -z 100063 ']' 00:24:31.129 13:09:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.129 13:09:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.129 13:09:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.129 13:09:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.129 13:09:11 -- common/autotest_common.sh@10 -- # set +x 00:24:31.129 [2024-12-13 13:09:11.777734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
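Note on the provisioning traced just above: target-side setup for nvmf_timeout is a short RPC sequence, namely a TCP transport, one 64 MB malloc bdev with 512-byte blocks, a subsystem with that bdev as a namespace, and a listener on 10.0.0.2 port 4420; bdevperf is then started on its own core (mask 0x4) with queue depth 128, 4096-byte I/O, a verify workload and a 10-second runtime. Collected in one place from the trace, with a $rpc shorthand added here only for brevity:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf acts as the NVMe-oF host, controlled over its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f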
00:24:31.129 [2024-12-13 13:09:11.777870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100063 ] 00:24:31.388 [2024-12-13 13:09:11.910427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.388 [2024-12-13 13:09:11.975909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.955 13:09:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.955 13:09:12 -- common/autotest_common.sh@862 -- # return 0 00:24:31.955 13:09:12 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:32.214 13:09:12 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:32.473 NVMe0n1 00:24:32.473 13:09:13 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.473 13:09:13 -- host/timeout.sh@51 -- # rpc_pid=100111 00:24:32.473 13:09:13 -- host/timeout.sh@53 -- # sleep 1 00:24:32.731 Running I/O for 10 seconds... 00:24:33.670 13:09:14 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.670 [2024-12-13 13:09:14.398681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398809] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 [2024-12-13 13:09:14.398883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 
[2024-12-13 13:09:14.398891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a60 is same with the state(5) to be set 00:24:33.670 (the same tcp.c:1576 message is logged repeatedly, from 13:09:14.398891 through 13:09:14.399322, as tqpair 0x7f8a60 is polled during listener teardown) 00:24:33.671 [2024-12-13 13:09:14.399585 .. 13:09:14.402324] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every outstanding READ/WRITE command on qid:1 (len:8, LBAs 129672 through 130944) is printed and completed as ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.674 [2024-12-13 13:09:14.402334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fcb80 is same with the state(5) to be set 00:24:33.674 [2024-12-13 13:09:14.402345] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.674 [2024-12-13 13:09:14.402353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.674 [2024-12-13 13:09:14.402361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130416 len:8 PRP1 0x0 PRP2 0x0 00:24:33.674 [2024-12-13 13:09:14.402370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.674 [2024-12-13 13:09:14.402421] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21fcb80 was disconnected and freed. reset controller. 00:24:33.674 [2024-12-13 13:09:14.402527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.674 [2024-12-13 13:09:14.402550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.674 [2024-12-13 13:09:14.402562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.674 [2024-12-13 13:09:14.402571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.674 [2024-12-13 13:09:14.402581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.674 [2024-12-13 13:09:14.402590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.674 [2024-12-13 13:09:14.402600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.674 [2024-12-13 13:09:14.402609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.674 [2024-12-13 13:09:14.402617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cb250 is same with the state(5) to be set 00:24:33.674 [2024-12-13 13:09:14.402857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.674 [2024-12-13 13:09:14.402888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cb250 (9): Bad file descriptor 00:24:33.674 [2024-12-13 13:09:14.402999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.674 [2024-12-13 13:09:14.403047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.674 [2024-12-13 13:09:14.403063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cb250 with addr=10.0.0.2, port=4420 00:24:33.674 [2024-12-13 13:09:14.403073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cb250 is same with the state(5) to be set 00:24:33.674 [2024-12-13 13:09:14.403091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cb250 (9): Bad file descriptor 00:24:33.674 [2024-12-13 13:09:14.403128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.674 [2024-12-13 13:09:14.403147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.674 [2024-12-13 13:09:14.403157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.674 [2024-12-13 13:09:14.415000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.674 [2024-12-13 13:09:14.415036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.674 13:09:14 -- host/timeout.sh@56 -- # sleep 2 00:24:36.208 [2024-12-13 13:09:16.415160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.208 [2024-12-13 13:09:16.415247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.208 [2024-12-13 13:09:16.415265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cb250 with addr=10.0.0.2, port=4420 00:24:36.208 [2024-12-13 13:09:16.415277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cb250 is same with the state(5) to be set 00:24:36.208 [2024-12-13 13:09:16.415299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cb250 (9): Bad file descriptor 00:24:36.208 [2024-12-13 13:09:16.415316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.208 [2024-12-13 13:09:16.415325] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.208 [2024-12-13 13:09:16.415335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.208 [2024-12-13 13:09:16.415358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
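The reconnect cadence in the timestamps follows directly from those attach options: with --reconnect-delay-sec 2 the initiator retries roughly every two seconds (the connect() attempts at 13:09:14 and 13:09:16 above, and 13:09:18 below, all fail with errno 111, ECONNREFUSED, since the listener is gone), and once --ctrlr-loss-timeout-sec 5 has elapsed the controller is marked failed, which is why the 13:09:20 attempt below reports it is already in failed state. A rough illustration of that schedule, arithmetic only and not SPDK code:

```bash
# Illustration only: reconnect schedule implied by the attach options used above.
reconnect_delay=2       # --reconnect-delay-sec
ctrlr_loss_timeout=5    # --ctrlr-loss-timeout-sec

elapsed=0
while (( elapsed <= ctrlr_loss_timeout )); do
        echo "t=+${elapsed}s: reconnect attempt (fails with ECONNREFUSED while the listener is down)"
        (( elapsed += reconnect_delay ))
done
echo "t>+${ctrlr_loss_timeout}s: controller declared lost; further resets report 'already in failed state'"
```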
00:24:36.208 [2024-12-13 13:09:16.415369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.208 13:09:16 -- host/timeout.sh@57 -- # get_controller 00:24:36.208 13:09:16 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.208 13:09:16 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:36.208 13:09:16 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:36.208 13:09:16 -- host/timeout.sh@58 -- # get_bdev 00:24:36.208 13:09:16 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:36.208 13:09:16 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:36.467 13:09:17 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:36.467 13:09:17 -- host/timeout.sh@61 -- # sleep 5 00:24:37.843 [2024-12-13 13:09:18.415488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.843 [2024-12-13 13:09:18.415624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.843 [2024-12-13 13:09:18.415641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cb250 with addr=10.0.0.2, port=4420 00:24:37.843 [2024-12-13 13:09:18.415653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cb250 is same with the state(5) to be set 00:24:37.843 [2024-12-13 13:09:18.415676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cb250 (9): Bad file descriptor 00:24:37.843 [2024-12-13 13:09:18.415694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.843 [2024-12-13 13:09:18.415703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.843 [2024-12-13 13:09:18.415728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.843 [2024-12-13 13:09:18.415785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.843 [2024-12-13 13:09:18.415796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.746 [2024-12-13 13:09:20.415830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.746 [2024-12-13 13:09:20.415883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.746 [2024-12-13 13:09:20.415910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.746 [2024-12-13 13:09:20.415918] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:39.746 [2024-12-13 13:09:20.415950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
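The xtrace above (host/timeout.sh@41/@37, invoked from @57/@58) checks that the controller and its namespace bdev stay registered in bdevperf while the reconnect loop is still failing. A minimal sketch of those helpers, reconstructed from the trace — the authoritative definitions live in the test script itself, the wrapper name rpc_bperf is only for this sketch, and the socket path is the one used throughout this run:

    # Reconstructed from the xtrace; rpc.py here talks to the bdevperf app, not the target.
    rpc_bperf() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"
    }

    get_controller() {
        # Prints "NVMe0" while the controller is attached, nothing once it is deleted.
        rpc_bperf bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Prints "NVMe0n1" while the namespace bdev still exists.
        rpc_bperf bdev_get_bdevs | jq -r '.[].name'
    }

    # Usage mirroring @57/@58: both names must still be present during the outage.
    [[ $(get_controller) == "NVMe0" ]]
    [[ $(get_bdev) == "NVMe0n1" ]]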
00:24:40.684 00:24:40.684 Latency(us) 00:24:40.684 [2024-12-13T13:09:21.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.684 [2024-12-13T13:09:21.460Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:40.684 Verification LBA range: start 0x0 length 0x4000 00:24:40.684 NVMe0n1 : 8.13 1997.04 7.80 15.74 0.00 63503.77 2517.18 7015926.69 00:24:40.684 [2024-12-13T13:09:21.460Z] =================================================================================================================== 00:24:40.684 [2024-12-13T13:09:21.460Z] Total : 1997.04 7.80 15.74 0.00 63503.77 2517.18 7015926.69 00:24:40.684 0 00:24:41.251 13:09:22 -- host/timeout.sh@62 -- # get_controller 00:24:41.251 13:09:22 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.251 13:09:22 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:41.510 13:09:22 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:41.510 13:09:22 -- host/timeout.sh@63 -- # get_bdev 00:24:41.510 13:09:22 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:41.510 13:09:22 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:42.077 13:09:22 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:42.077 13:09:22 -- host/timeout.sh@65 -- # wait 100111 00:24:42.077 13:09:22 -- host/timeout.sh@67 -- # killprocess 100063 00:24:42.077 13:09:22 -- common/autotest_common.sh@936 -- # '[' -z 100063 ']' 00:24:42.077 13:09:22 -- common/autotest_common.sh@940 -- # kill -0 100063 00:24:42.077 13:09:22 -- common/autotest_common.sh@941 -- # uname 00:24:42.077 13:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:42.077 13:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100063 00:24:42.077 13:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:42.077 killing process with pid 100063 00:24:42.077 13:09:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:42.077 13:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100063' 00:24:42.077 Received shutdown signal, test time was about 9.334808 seconds 00:24:42.077 00:24:42.077 Latency(us) 00:24:42.077 [2024-12-13T13:09:22.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.077 [2024-12-13T13:09:22.853Z] =================================================================================================================== 00:24:42.077 [2024-12-13T13:09:22.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.077 13:09:22 -- common/autotest_common.sh@955 -- # kill 100063 00:24:42.077 13:09:22 -- common/autotest_common.sh@960 -- # wait 100063 00:24:42.077 13:09:22 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.345 [2024-12-13 13:09:23.070924] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.345 13:09:23 -- host/timeout.sh@74 -- # bdevperf_pid=100264 00:24:42.345 13:09:23 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:42.345 13:09:23 -- host/timeout.sh@76 -- # waitforlisten 100264 /var/tmp/bdevperf.sock 00:24:42.345 13:09:23 -- common/autotest_common.sh@829 -- # '[' -z 100264 ']' 00:24:42.345 13:09:23 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.345 13:09:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.345 13:09:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.345 13:09:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.345 13:09:23 -- common/autotest_common.sh@10 -- # set +x 00:24:42.637 [2024-12-13 13:09:23.142245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:42.637 [2024-12-13 13:09:23.142335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100264 ] 00:24:42.637 [2024-12-13 13:09:23.274461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.637 [2024-12-13 13:09:23.337636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.585 13:09:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.585 13:09:24 -- common/autotest_common.sh@862 -- # return 0 00:24:43.585 13:09:24 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:43.844 13:09:24 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:44.103 NVMe0n1 00:24:44.103 13:09:24 -- host/timeout.sh@84 -- # rpc_pid=100313 00:24:44.103 13:09:24 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.103 13:09:24 -- host/timeout.sh@86 -- # sleep 1 00:24:44.103 Running I/O for 10 seconds... 
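Lines @73-@86 of the trace start a fresh bdevperf in wait-for-RPC mode and attach the target with explicit reconnect knobs before kicking off the verify job. A condensed sketch of that sequence, using the same paths, address, and flag values shown above; the flag comments reflect the usual SPDK bdev_nvme semantics rather than anything stated in this log:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # Idle bdevperf (-z) on core 2; the harness waits for $sock to appear before issuing RPCs.
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &

    # Same set_options call as in the trace: -r -1 (unlimited bdev-level I/O retries).
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1

    # Attach the target: retry the connection every 1 s, fail new I/O quickly after 2 s
    # without a usable path, and give up on the controller entirely after 5 s.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Run the workload defined on the bdevperf command line above.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &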
00:24:45.037 13:09:25 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.298 [2024-12-13 13:09:26.012877] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.013586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.013688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.013841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.013913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.013982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.014050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.014127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.014221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.014284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.298 [2024-12-13 13:09:26.014358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.014942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015257] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015388] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015768] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.015967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcd90 is same with the state(5) to be set 00:24:45.299 [2024-12-13 13:09:26.016333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 
[2024-12-13 13:09:26.016669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.016987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.016998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.017007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.299 [2024-12-13 13:09:26.017018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.299 [2024-12-13 13:09:26.017028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 
[2024-12-13 13:09:26.017819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.300 [2024-12-13 13:09:26.017884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.300 [2024-12-13 13:09:26.017939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.300 [2024-12-13 13:09:26.017950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.017960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.017971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.017980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.017991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:111 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:45.301 [2024-12-13 13:09:26.018689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.301 [2024-12-13 13:09:26.018734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.301 [2024-12-13 13:09:26.018812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.301 [2024-12-13 13:09:26.018822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.302 [2024-12-13 13:09:26.018842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.018862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.018881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.302 [2024-12-13 13:09:26.018901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.018920] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.302 [2024-12-13 13:09:26.018940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.018959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.302 [2024-12-13 13:09:26.018979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.018989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.018998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.019009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.019018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.019029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.019038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.019049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.019063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.019074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.019083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.019093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.019111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.302 [2024-12-13 13:09:26.019140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.302 [2024-12-13 13:09:26.019150] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.302 [2024-12-13 13:09:26.019161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:45.302 [2024-12-13 13:09:26.019169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.302 [2024-12-13 13:09:26.019180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe9c0 is same with the state(5) to be set
00:24:45.302 [2024-12-13 13:09:26.019192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:45.302 [2024-12-13 13:09:26.019199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:45.302 [2024-12-13 13:09:26.019207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10696 len:8 PRP1 0x0 PRP2 0x0
00:24:45.302 [2024-12-13 13:09:26.019216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.302 [2024-12-13 13:09:26.019268] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xefe9c0 was disconnected and freed. reset controller.
00:24:45.302 [2024-12-13 13:09:26.019529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.302 [2024-12-13 13:09:26.019622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor
00:24:45.302 [2024-12-13 13:09:26.026238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor
00:24:45.302 [2024-12-13 13:09:26.026288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:45.302 [2024-12-13 13:09:26.026302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:45.302 [2024-12-13 13:09:26.026312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.302 [2024-12-13 13:09:26.026330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.302 [2024-12-13 13:09:26.026341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.302 13:09:26 -- host/timeout.sh@90 -- # sleep 1
00:24:46.679 [2024-12-13 13:09:27.026431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.679 [2024-12-13 13:09:27.026852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.679 [2024-12-13 13:09:27.026974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd090 with addr=10.0.0.2, port=4420
00:24:46.679 [2024-12-13 13:09:27.027080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd090 is same with the state(5) to be set
00:24:46.679 [2024-12-13 13:09:27.027205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor
00:24:46.679 [2024-12-13 13:09:27.027298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:46.679 [2024-12-13 13:09:27.027386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:46.679 [2024-12-13 13:09:27.027453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:46.679 [2024-12-13 13:09:27.027545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:46.679 [2024-12-13 13:09:27.027628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:46.679 13:09:27 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:46.679 [2024-12-13 13:09:27.315430] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:46.679 13:09:27 -- host/timeout.sh@92 -- # wait 100313
00:24:47.615 [2024-12-13 13:09:28.047184] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:54.179
00:24:54.179 Latency(us)
00:24:54.179 [2024-12-13T13:09:34.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:54.179 [2024-12-13T13:09:34.955Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:54.179 Verification LBA range: start 0x0 length 0x4000
00:24:54.179 NVMe0n1 : 10.01 10728.77 41.91 0.00 0.00 11907.09 1161.77 3019898.88
00:24:54.179 [2024-12-13T13:09:34.955Z] ===================================================================================================================
00:24:54.179 [2024-12-13T13:09:34.955Z] Total : 10728.77 41.91 0.00 0.00 11907.09 1161.77 3019898.88
00:24:54.179 0
00:24:54.179 13:09:34 -- host/timeout.sh@97 -- # rpc_pid=100435
00:24:54.179 13:09:34 -- host/timeout.sh@98 -- # sleep 1
00:24:54.179 13:09:34 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:54.438 Running I/O for 10 seconds...
00:24:55.374 13:09:35 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.374 [2024-12-13 13:09:36.145403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.146942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.147979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148353] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.148956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149430] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.149960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.150032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.150096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.374 [2024-12-13 13:09:36.150158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.375 [2024-12-13 13:09:36.150238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.375 [2024-12-13 13:09:36.150317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.375 [2024-12-13 13:09:36.150396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.150509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.150590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.150685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.150778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.150887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.150990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.151061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.151150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.151218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 
00:24:55.636 [2024-12-13 13:09:36.151282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.151355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6569c0 is same with the state(5) to be set 00:24:55.636 [2024-12-13 13:09:36.151710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.151989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.151998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.636 [2024-12-13 13:09:36.152269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.636 [2024-12-13 13:09:36.152279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7080 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 
13:09:36.152642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.152721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.152772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.152814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.152835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.152855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.152875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.152981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.152992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.153001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.153021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.153041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.153061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.153081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.153102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.153138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.637 [2024-12-13 13:09:36.153173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.637 [2024-12-13 13:09:36.153183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.637 [2024-12-13 13:09:36.153192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:55.638 [2024-12-13 13:09:36.153325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153587] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.153900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.153987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.153998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.154006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.154017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7024 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.154027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.154038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.154047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.154058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.154067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.154078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.638 [2024-12-13 13:09:36.154087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.638 [2024-12-13 13:09:36.154098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.638 [2024-12-13 13:09:36.154107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.639 [2024-12-13 13:09:36.154126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.639 [2024-12-13 13:09:36.154240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.639 [2024-12-13 
13:09:36.154261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.639 [2024-12-13 13:09:36.154286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.639 [2024-12-13 13:09:36.154351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.639 [2024-12-13 13:09:36.154431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.639 [2024-12-13 13:09:36.154452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.639 [2024-12-13 13:09:36.154612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb570 is same with the state(5) to be set 00:24:55.639 [2024-12-13 13:09:36.154649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.639 [2024-12-13 13:09:36.154656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.639 [2024-12-13 13:09:36.154664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7224 len:8 PRP1 0x0 PRP2 0x0 00:24:55.639 [2024-12-13 13:09:36.154672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.639 [2024-12-13 13:09:36.154724] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xefb570 was disconnected and freed. reset controller. 
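The wall of NOTICE lines above is bdevperf's queued I/O being completed manually with ABORTED - SQ DELETION once qpair 0xefb570 is disconnected and freed for the controller reset. When scanning a saved copy of this console output, a quick filter is enough to confirm that the burst is all the same status; the sketch below is only an illustrative triage aid (the build.log path is a stand-in for wherever this output was saved, not something the test produces).

```bash
#!/usr/bin/env bash
# Triage sketch only: summarize the "ABORTED - SQ DELETION" burst from a saved
# copy of this console output. "build.log" is a placeholder path.
log=build.log

# Total number of completions reported as ABORTED - SQ DELETION.
grep -o 'ABORTED - SQ DELETION' "$log" | wc -l

# Aborted submissions broken down by opcode (READ/WRITE) and submission queue id.
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]* sqid:[0-9]*' "$log" |
  awk '{print $3, $4}' | sort | uniq -c | sort -rn
```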
00:24:55.639 [2024-12-13 13:09:36.154964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.639 [2024-12-13 13:09:36.155043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor 00:24:55.639 [2024-12-13 13:09:36.155173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.639 [2024-12-13 13:09:36.155231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.639 [2024-12-13 13:09:36.155249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd090 with addr=10.0.0.2, port=4420 00:24:55.639 [2024-12-13 13:09:36.155260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd090 is same with the state(5) to be set 00:24:55.639 [2024-12-13 13:09:36.155279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor 00:24:55.639 [2024-12-13 13:09:36.155295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.639 [2024-12-13 13:09:36.155304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.639 [2024-12-13 13:09:36.155314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.639 [2024-12-13 13:09:36.155334] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.639 [2024-12-13 13:09:36.155345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.639 13:09:36 -- host/timeout.sh@101 -- # sleep 3 00:24:56.575 [2024-12-13 13:09:37.155427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.575 [2024-12-13 13:09:37.155979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.575 [2024-12-13 13:09:37.156112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd090 with addr=10.0.0.2, port=4420 00:24:56.575 [2024-12-13 13:09:37.156211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd090 is same with the state(5) to be set 00:24:56.575 [2024-12-13 13:09:37.156304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor 00:24:56.575 [2024-12-13 13:09:37.156397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.575 [2024-12-13 13:09:37.156473] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.575 [2024-12-13 13:09:37.156543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.575 [2024-12-13 13:09:37.156653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.575 [2024-12-13 13:09:37.156734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.511 [2024-12-13 13:09:38.156923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-12-13 13:09:38.157306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-12-13 13:09:38.157415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd090 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-12-13 13:09:38.157502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd090 is same with the state(5) to be set 00:24:57.511 [2024-12-13 13:09:38.157584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor 00:24:57.511 [2024-12-13 13:09:38.157663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-12-13 13:09:38.157728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-12-13 13:09:38.157817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-12-13 13:09:38.157908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.511 [2024-12-13 13:09:38.157984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.447 [2024-12-13 13:09:39.158332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.447 [2024-12-13 13:09:39.158720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.447 [2024-12-13 13:09:39.158841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd090 with addr=10.0.0.2, port=4420 00:24:58.447 [2024-12-13 13:09:39.158917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd090 is same with the state(5) to be set 00:24:58.447 [2024-12-13 13:09:39.159212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd090 (9): Bad file descriptor 00:24:58.447 [2024-12-13 13:09:39.159545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.447 [2024-12-13 13:09:39.159649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.447 [2024-12-13 13:09:39.159722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.447 [2024-12-13 13:09:39.162106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.447 [2024-12-13 13:09:39.162219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.447 13:09:39 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.706 [2024-12-13 13:09:39.425556] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.706 13:09:39 -- host/timeout.sh@103 -- # wait 100435 00:24:59.641 [2024-12-13 13:09:40.179376] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
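The stretch above is the host side of the timeout test doing what it is designed to exercise: with the target's listener gone, every connect() to 10.0.0.2 port 4420 fails with errno 111, spdk_nvme_ctrlr_reconnect_poll_async reports the reset as failed, and the cycle repeats roughly once a second until timeout.sh restores the listener (host/timeout.sh@102) and the final reset succeeds. A minimal sketch of the listener bounce that produces this sequence, using the rpc.py calls that appear verbatim in the trace (an nvmf target and a connected initiator are assumed to already be running, as they are here), looks like this:

```bash
#!/usr/bin/env bash
# Minimal sketch of the listener bounce exercised by host/timeout.sh.
# Paths, NQN and address are copied from the trace; the target and the
# connected initiator are assumed to already be up.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Drop the TCP listener so the initiator's reconnect attempts fail (errno 111).
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Let a few reconnect attempts fail (the trace shows one roughly every second).
sleep 3

# Put the listener back; the next reset attempt reconnects and succeeds.
$rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
```

The second bdevperf run later issues the same remove_listener call (host/timeout.sh@126) against a controller attached with --ctrlr-loss-timeout-sec 5, presumably to exercise the bounded retry window that option implies.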
00:25:04.911
00:25:04.911 Latency(us)
00:25:04.911 [2024-12-13T13:09:45.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:04.911 [2024-12-13T13:09:45.687Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:04.911 Verification LBA range: start 0x0 length 0x4000
00:25:04.911 NVMe0n1 : 10.01 9190.10 35.90 6435.39 0.00 8174.05 551.10 3019898.88
00:25:04.911 [2024-12-13T13:09:45.687Z] ===================================================================================================================
00:25:04.911 [2024-12-13T13:09:45.687Z] Total : 9190.10 35.90 6435.39 0.00 8174.05 0.00 3019898.88
00:25:04.911 0
00:25:04.911 13:09:45 -- host/timeout.sh@105 -- # killprocess 100264
00:25:04.911 13:09:45 -- common/autotest_common.sh@936 -- # '[' -z 100264 ']'
00:25:04.911 13:09:45 -- common/autotest_common.sh@940 -- # kill -0 100264
00:25:04.911 13:09:45 -- common/autotest_common.sh@941 -- # uname
00:25:04.911 13:09:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:04.911 13:09:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100264
00:25:04.911 13:09:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:04.911 13:09:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:04.911 13:09:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100264'
killing process with pid 100264
Received shutdown signal, test time was about 10.000000 seconds
00:25:04.912
00:25:04.912 Latency(us)
00:25:04.912 [2024-12-13T13:09:45.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:04.912 [2024-12-13T13:09:45.688Z] ===================================================================================================================
00:25:04.912 [2024-12-13T13:09:45.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:04.912 13:09:45 -- common/autotest_common.sh@955 -- # kill 100264
00:25:04.912 13:09:45 -- common/autotest_common.sh@960 -- # wait 100264
00:25:04.912 13:09:45 -- host/timeout.sh@110 -- # bdevperf_pid=100556
00:25:04.912 13:09:45 -- host/timeout.sh@112 -- # waitforlisten 100556 /var/tmp/bdevperf.sock
00:25:04.912 13:09:45 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:25:04.912 13:09:45 -- common/autotest_common.sh@829 -- # '[' -z 100556 ']'
00:25:04.912 13:09:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:04.912 13:09:45 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:04.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:04.912 13:09:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:04.912 13:09:45 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:04.912 13:09:45 -- common/autotest_common.sh@10 -- # set +x
00:25:04.912 [2024-12-13 13:09:45.298998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
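At this point the first bdevperf (pid 100264, reactor_2) has reported its run (9190.10 IOPS with 6435.39 failed I/O per second over 10.01 s) and been killed, and a fresh bdevperf is started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f. Because of -z it sits idle until it is driven over that RPC socket, and the next stretch of the trace does exactly that. A condensed sketch of that RPC sequence, with every command and option value copied from the trace rather than recommended here, would be:

```bash
#!/usr/bin/env bash
# Sketch of how the timeout test drives the freshly started bdevperf over its
# RPC socket. Commands and option values are copied from the trace below.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# (The test also attaches scripts/bpf/nvmf_timeout.bt to the bdevperf pid via
# scripts/bpftrace.sh before configuring the bdev, per host/timeout.sh@115.)

# Global NVMe bdev options used by this run (values exactly as logged).
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1 -e 9

# Attach the NVMe-oF/TCP controller with a 5 s controller-loss timeout and a
# 2 s delay between reconnect attempts.
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the configured randread workload inside bdevperf.
$spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
```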
00:25:04.912 [2024-12-13 13:09:45.299145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100556 ] 00:25:04.912 [2024-12-13 13:09:45.429875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.912 [2024-12-13 13:09:45.495271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.479 13:09:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.479 13:09:46 -- common/autotest_common.sh@862 -- # return 0 00:25:05.479 13:09:46 -- host/timeout.sh@116 -- # dtrace_pid=100584 00:25:05.479 13:09:46 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100556 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:05.479 13:09:46 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:06.047 13:09:46 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:06.047 NVMe0n1 00:25:06.047 13:09:46 -- host/timeout.sh@124 -- # rpc_pid=100638 00:25:06.047 13:09:46 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.047 13:09:46 -- host/timeout.sh@125 -- # sleep 1 00:25:06.306 Running I/O for 10 seconds... 00:25:07.242 13:09:47 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.503 [2024-12-13 13:09:48.072182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.503 [2024-12-13 13:09:48.072324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x659db0 is same with the state(5) to be set
[... the tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x659db0 repeats many more times at this point in the log, identical except for the timestamp ...]
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.505 [2024-12-13 13:09:48.073285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.505 [2024-12-13 13:09:48.073293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.505 [2024-12-13 13:09:48.073300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.505 [2024-12-13 13:09:48.073308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.505 [2024-12-13 13:09:48.073315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.505 [2024-12-13 13:09:48.073322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659db0 is same with the state(5) to be set 00:25:07.505 [2024-12-13 13:09:48.073704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:07.505 [2024-12-13 13:09:48.073916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.073983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.073993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 
13:09:48.074151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.505 [2024-12-13 13:09:48.074282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.505 [2024-12-13 13:09:48.074291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25160 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:07.506 [2024-12-13 13:09:48.074979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.074989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.074998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.075009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.075018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.075028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.075037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.075047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.075056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.075066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.075075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.075087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.075113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.075125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.506 [2024-12-13 13:09:48.075135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.506 [2024-12-13 13:09:48.075147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075195] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.507 [2024-12-13 13:09:48.075923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.507 [2024-12-13 13:09:48.075940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.075951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.075960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.075970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.075979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.075990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.075999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.508 [2024-12-13 13:09:48.076029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 
13:09:48.076237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.508 [2024-12-13 13:09:48.076409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.508 [2024-12-13 13:09:48.076443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.508 [2024-12-13 13:09:48.076451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49200 len:8 PRP1 0x0 PRP2 0x0 
00:25:07.508 [2024-12-13 13:09:48.076460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.508 [2024-12-13 13:09:48.076511] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde2d10 was disconnected and freed. reset controller. 00:25:07.508 [2024-12-13 13:09:48.076810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.508 [2024-12-13 13:09:48.076898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb10b0 (9): Bad file descriptor 00:25:07.508 [2024-12-13 13:09:48.077017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.508 [2024-12-13 13:09:48.077072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.508 [2024-12-13 13:09:48.077094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb10b0 with addr=10.0.0.2, port=4420 00:25:07.508 [2024-12-13 13:09:48.077104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb10b0 is same with the state(5) to be set 00:25:07.508 [2024-12-13 13:09:48.077122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb10b0 (9): Bad file descriptor 00:25:07.508 [2024-12-13 13:09:48.077138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.508 [2024-12-13 13:09:48.077148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.508 [2024-12-13 13:09:48.077158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.508 [2024-12-13 13:09:48.077178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.508 [2024-12-13 13:09:48.077188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.508 13:09:48 -- host/timeout.sh@128 -- # wait 100638 00:25:09.438 [2024-12-13 13:09:50.077343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.438 [2024-12-13 13:09:50.077437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.438 [2024-12-13 13:09:50.077455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb10b0 with addr=10.0.0.2, port=4420 00:25:09.438 [2024-12-13 13:09:50.077467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb10b0 is same with the state(5) to be set 00:25:09.439 [2024-12-13 13:09:50.077489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb10b0 (9): Bad file descriptor 00:25:09.439 [2024-12-13 13:09:50.077520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.439 [2024-12-13 13:09:50.077545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.439 [2024-12-13 13:09:50.077571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.439 [2024-12-13 13:09:50.077613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.439 [2024-12-13 13:09:50.077624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.340 [2024-12-13 13:09:52.077739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.340 [2024-12-13 13:09:52.077848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.340 [2024-12-13 13:09:52.077866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb10b0 with addr=10.0.0.2, port=4420 00:25:11.340 [2024-12-13 13:09:52.077877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb10b0 is same with the state(5) to be set 00:25:11.340 [2024-12-13 13:09:52.077913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb10b0 (9): Bad file descriptor 00:25:11.340 [2024-12-13 13:09:52.077950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.340 [2024-12-13 13:09:52.077965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.340 [2024-12-13 13:09:52.077984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.340 [2024-12-13 13:09:52.078007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.340 [2024-12-13 13:09:52.078018] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.871 [2024-12-13 13:09:54.078074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.871 [2024-12-13 13:09:54.078110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.871 [2024-12-13 13:09:54.078137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.871 [2024-12-13 13:09:54.078147] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:13.871 [2024-12-13 13:09:54.078183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.447 00:25:14.447 Latency(us) 00:25:14.447 [2024-12-13T13:09:55.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.447 [2024-12-13T13:09:55.223Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:14.447 NVMe0n1 : 8.17 3202.48 12.51 15.67 0.00 39741.70 2829.96 7015926.69 00:25:14.447 [2024-12-13T13:09:55.223Z] =================================================================================================================== 00:25:14.447 [2024-12-13T13:09:55.223Z] Total : 3202.48 12.51 15.67 0.00 39741.70 2829.96 7015926.69 00:25:14.447 0 00:25:14.447 13:09:55 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:14.447 Attaching 5 probes... 
00:25:14.447 1313.174053: reset bdev controller NVMe0 00:25:14.447 1313.321172: reconnect bdev controller NVMe0 00:25:14.447 3313.587177: reconnect delay bdev controller NVMe0 00:25:14.447 3313.606393: reconnect bdev controller NVMe0 00:25:14.447 5314.016655: reconnect delay bdev controller NVMe0 00:25:14.447 5314.036680: reconnect bdev controller NVMe0 00:25:14.447 7314.410689: reconnect delay bdev controller NVMe0 00:25:14.447 7314.426323: reconnect bdev controller NVMe0 00:25:14.447 13:09:55 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:14.447 13:09:55 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:14.447 13:09:55 -- host/timeout.sh@136 -- # kill 100584 00:25:14.447 13:09:55 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:14.447 13:09:55 -- host/timeout.sh@139 -- # killprocess 100556 00:25:14.447 13:09:55 -- common/autotest_common.sh@936 -- # '[' -z 100556 ']' 00:25:14.447 13:09:55 -- common/autotest_common.sh@940 -- # kill -0 100556 00:25:14.447 13:09:55 -- common/autotest_common.sh@941 -- # uname 00:25:14.447 13:09:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:14.447 13:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100556 00:25:14.447 killing process with pid 100556 00:25:14.447 Received shutdown signal, test time was about 8.236131 seconds 00:25:14.447 00:25:14.447 Latency(us) 00:25:14.447 [2024-12-13T13:09:55.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.447 [2024-12-13T13:09:55.223Z] =================================================================================================================== 00:25:14.448 [2024-12-13T13:09:55.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.448 13:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:14.448 13:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:14.448 13:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100556' 00:25:14.448 13:09:55 -- common/autotest_common.sh@955 -- # kill 100556 00:25:14.448 13:09:55 -- common/autotest_common.sh@960 -- # wait 100556 00:25:14.713 13:09:55 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:14.972 13:09:55 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:14.972 13:09:55 -- host/timeout.sh@145 -- # nvmftestfini 00:25:14.972 13:09:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:14.972 13:09:55 -- nvmf/common.sh@116 -- # sync 00:25:14.972 13:09:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:14.972 13:09:55 -- nvmf/common.sh@119 -- # set +e 00:25:14.972 13:09:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:14.972 13:09:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:14.972 rmmod nvme_tcp 00:25:14.972 rmmod nvme_fabrics 00:25:14.972 rmmod nvme_keyring 00:25:14.972 13:09:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:14.972 13:09:55 -- nvmf/common.sh@123 -- # set -e 00:25:14.972 13:09:55 -- nvmf/common.sh@124 -- # return 0 00:25:14.972 13:09:55 -- nvmf/common.sh@477 -- # '[' -n 99971 ']' 00:25:14.972 13:09:55 -- nvmf/common.sh@478 -- # killprocess 99971 00:25:14.972 13:09:55 -- common/autotest_common.sh@936 -- # '[' -z 99971 ']' 00:25:14.972 13:09:55 -- common/autotest_common.sh@940 -- # kill -0 99971 00:25:14.972 13:09:55 -- common/autotest_common.sh@941 -- # uname 00:25:14.972 13:09:55 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:25:14.972 13:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99971 00:25:14.972 killing process with pid 99971 00:25:14.972 13:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:14.972 13:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:14.972 13:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99971' 00:25:14.972 13:09:55 -- common/autotest_common.sh@955 -- # kill 99971 00:25:14.972 13:09:55 -- common/autotest_common.sh@960 -- # wait 99971 00:25:15.231 13:09:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:15.231 13:09:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:15.231 13:09:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:15.231 13:09:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.231 13:09:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:15.231 13:09:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.231 13:09:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.231 13:09:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.231 13:09:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:15.231 00:25:15.231 real 0m47.219s 00:25:15.231 user 2m18.941s 00:25:15.231 sys 0m5.020s 00:25:15.231 13:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:15.231 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:25:15.231 ************************************ 00:25:15.231 END TEST nvmf_timeout 00:25:15.231 ************************************ 00:25:15.231 13:09:56 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:15.231 13:09:56 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:15.231 13:09:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.231 13:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:15.489 13:09:56 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:15.489 00:25:15.489 real 17m25.627s 00:25:15.489 user 55m35.347s 00:25:15.489 sys 3m51.474s 00:25:15.489 13:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:15.489 13:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:15.489 ************************************ 00:25:15.489 END TEST nvmf_tcp 00:25:15.489 ************************************ 00:25:15.489 13:09:56 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:15.489 13:09:56 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:15.489 13:09:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:15.489 13:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:15.489 13:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:15.489 ************************************ 00:25:15.489 START TEST spdkcli_nvmf_tcp 00:25:15.489 ************************************ 00:25:15.489 13:09:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:15.489 * Looking for test storage... 
00:25:15.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:15.489 13:09:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:15.489 13:09:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:15.489 13:09:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:15.489 13:09:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:15.489 13:09:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:15.489 13:09:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:15.489 13:09:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:15.490 13:09:56 -- scripts/common.sh@335 -- # IFS=.-: 00:25:15.490 13:09:56 -- scripts/common.sh@335 -- # read -ra ver1 00:25:15.490 13:09:56 -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.490 13:09:56 -- scripts/common.sh@336 -- # read -ra ver2 00:25:15.490 13:09:56 -- scripts/common.sh@337 -- # local 'op=<' 00:25:15.490 13:09:56 -- scripts/common.sh@339 -- # ver1_l=2 00:25:15.490 13:09:56 -- scripts/common.sh@340 -- # ver2_l=1 00:25:15.490 13:09:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:15.490 13:09:56 -- scripts/common.sh@343 -- # case "$op" in 00:25:15.490 13:09:56 -- scripts/common.sh@344 -- # : 1 00:25:15.490 13:09:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:15.490 13:09:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:15.490 13:09:56 -- scripts/common.sh@364 -- # decimal 1 00:25:15.490 13:09:56 -- scripts/common.sh@352 -- # local d=1 00:25:15.490 13:09:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.490 13:09:56 -- scripts/common.sh@354 -- # echo 1 00:25:15.490 13:09:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:15.490 13:09:56 -- scripts/common.sh@365 -- # decimal 2 00:25:15.490 13:09:56 -- scripts/common.sh@352 -- # local d=2 00:25:15.490 13:09:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.490 13:09:56 -- scripts/common.sh@354 -- # echo 2 00:25:15.490 13:09:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:15.490 13:09:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:15.490 13:09:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:15.490 13:09:56 -- scripts/common.sh@367 -- # return 0 00:25:15.490 13:09:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.490 13:09:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:15.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.490 --rc genhtml_branch_coverage=1 00:25:15.490 --rc genhtml_function_coverage=1 00:25:15.490 --rc genhtml_legend=1 00:25:15.490 --rc geninfo_all_blocks=1 00:25:15.490 --rc geninfo_unexecuted_blocks=1 00:25:15.490 00:25:15.490 ' 00:25:15.490 13:09:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:15.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.490 --rc genhtml_branch_coverage=1 00:25:15.490 --rc genhtml_function_coverage=1 00:25:15.490 --rc genhtml_legend=1 00:25:15.490 --rc geninfo_all_blocks=1 00:25:15.490 --rc geninfo_unexecuted_blocks=1 00:25:15.490 00:25:15.490 ' 00:25:15.490 13:09:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:15.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.490 --rc genhtml_branch_coverage=1 00:25:15.490 --rc genhtml_function_coverage=1 00:25:15.490 --rc genhtml_legend=1 00:25:15.490 --rc geninfo_all_blocks=1 00:25:15.490 --rc geninfo_unexecuted_blocks=1 00:25:15.490 00:25:15.490 ' 00:25:15.490 13:09:56 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:15.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.490 --rc genhtml_branch_coverage=1 00:25:15.490 --rc genhtml_function_coverage=1 00:25:15.490 --rc genhtml_legend=1 00:25:15.490 --rc geninfo_all_blocks=1 00:25:15.490 --rc geninfo_unexecuted_blocks=1 00:25:15.490 00:25:15.490 ' 00:25:15.490 13:09:56 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:15.490 13:09:56 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:15.490 13:09:56 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:15.490 13:09:56 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:15.490 13:09:56 -- nvmf/common.sh@7 -- # uname -s 00:25:15.749 13:09:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.749 13:09:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.749 13:09:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.749 13:09:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.749 13:09:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.749 13:09:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.749 13:09:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.749 13:09:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.749 13:09:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.749 13:09:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.749 13:09:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:25:15.749 13:09:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:25:15.749 13:09:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.749 13:09:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.749 13:09:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:15.749 13:09:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:15.749 13:09:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.749 13:09:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.749 13:09:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.749 13:09:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.749 13:09:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.749 13:09:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.749 13:09:56 -- paths/export.sh@5 -- # export PATH 00:25:15.749 13:09:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.749 13:09:56 -- nvmf/common.sh@46 -- # : 0 00:25:15.749 13:09:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:15.749 13:09:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:15.749 13:09:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:15.749 13:09:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.749 13:09:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.749 13:09:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:15.749 13:09:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:15.749 13:09:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:15.749 13:09:56 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:15.749 13:09:56 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:15.749 13:09:56 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:15.749 13:09:56 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:15.749 13:09:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:15.749 13:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:15.749 13:09:56 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:15.749 13:09:56 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100863 00:25:15.749 13:09:56 -- spdkcli/common.sh@34 -- # waitforlisten 100863 00:25:15.749 13:09:56 -- common/autotest_common.sh@829 -- # '[' -z 100863 ']' 00:25:15.749 13:09:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.749 13:09:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.750 13:09:56 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:15.750 13:09:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.750 13:09:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.750 13:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:15.750 [2024-12-13 13:09:56.339038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:15.750 [2024-12-13 13:09:56.339153] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100863 ] 00:25:15.750 [2024-12-13 13:09:56.472072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:16.009 [2024-12-13 13:09:56.531317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:16.009 [2024-12-13 13:09:56.531628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.009 [2024-12-13 13:09:56.531649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.575 13:09:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.575 13:09:57 -- common/autotest_common.sh@862 -- # return 0 00:25:16.575 13:09:57 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:16.575 13:09:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.575 13:09:57 -- common/autotest_common.sh@10 -- # set +x 00:25:16.834 13:09:57 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:16.834 13:09:57 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:16.834 13:09:57 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:16.834 13:09:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.834 13:09:57 -- common/autotest_common.sh@10 -- # set +x 00:25:16.834 13:09:57 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:16.834 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:16.834 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:16.834 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:16.834 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:16.834 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:16.834 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:16.834 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:16.834 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:16.834 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:16.834 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:16.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:16.834 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:16.834 ' 00:25:17.093 [2024-12-13 13:09:57.858268] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:19.626 [2024-12-13 13:10:00.124495] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.002 [2024-12-13 13:10:01.409499] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:23.535 [2024-12-13 13:10:03.795163] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:25.442 [2024-12-13 13:10:05.848590] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:26.817 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:26.817 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:26.817 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:26.818 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:26.818 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:26.818 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:26.818 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:26.818 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:26.818 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:26.818 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:26.818 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:26.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:26.818 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:26.818 13:10:07 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:26.818 13:10:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:26.818 13:10:07 -- common/autotest_common.sh@10 -- # set +x 00:25:26.818 13:10:07 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:26.818 13:10:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:26.818 13:10:07 -- common/autotest_common.sh@10 -- # set +x 00:25:26.818 13:10:07 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:26.818 13:10:07 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:27.385 13:10:08 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:27.385 13:10:08 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:27.385 13:10:08 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:27.385 13:10:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.385 13:10:08 -- common/autotest_common.sh@10 -- # set +x 00:25:27.385 13:10:08 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:27.385 13:10:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:27.385 13:10:08 -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.385 13:10:08 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:27.385 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:27.385 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:27.385 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:27.385 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:27.385 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:27.385 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:27.385 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:27.385 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:27.385 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:27.385 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:27.385 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:27.385 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:27.385 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:27.385 ' 00:25:33.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:33.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:33.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:33.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:33.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:33.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:33.970 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:33.970 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:33.970 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:33.970 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:33.970 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:33.970 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:33.970 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:33.970 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:33.970 13:10:13 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:33.970 13:10:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.970 13:10:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.970 13:10:13 -- spdkcli/nvmf.sh@90 -- # killprocess 100863 00:25:33.970 13:10:13 -- common/autotest_common.sh@936 -- # '[' -z 100863 ']' 00:25:33.970 13:10:13 -- common/autotest_common.sh@940 -- # kill -0 100863 00:25:33.970 13:10:13 -- common/autotest_common.sh@941 -- # uname 00:25:33.970 13:10:13 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:33.970 13:10:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100863 00:25:33.970 13:10:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:33.970 killing process with pid 100863 00:25:33.970 13:10:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:33.970 13:10:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100863' 00:25:33.970 13:10:13 -- common/autotest_common.sh@955 -- # kill 100863 00:25:33.970 [2024-12-13 13:10:13.664012] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:33.970 13:10:13 -- common/autotest_common.sh@960 -- # wait 100863 00:25:33.970 13:10:13 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:33.970 13:10:13 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:33.970 13:10:13 -- spdkcli/common.sh@13 -- # '[' -n 100863 ']' 00:25:33.970 13:10:13 -- spdkcli/common.sh@14 -- # killprocess 100863 00:25:33.970 13:10:13 -- common/autotest_common.sh@936 -- # '[' -z 100863 ']' 00:25:33.970 13:10:13 -- common/autotest_common.sh@940 -- # kill -0 100863 00:25:33.970 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (100863) - No such process 00:25:33.970 Process with pid 100863 is not found 00:25:33.970 13:10:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 100863 is not found' 00:25:33.970 13:10:13 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:33.970 13:10:13 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:33.970 13:10:13 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:33.970 00:25:33.970 real 0m17.769s 00:25:33.970 user 0m38.628s 00:25:33.970 sys 0m0.892s 00:25:33.970 13:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:33.970 13:10:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.970 ************************************ 00:25:33.970 END TEST spdkcli_nvmf_tcp 00:25:33.970 ************************************ 00:25:33.970 13:10:13 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:33.970 13:10:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:33.970 13:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.970 13:10:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.970 ************************************ 00:25:33.970 START TEST nvmf_identify_passthru 00:25:33.970 ************************************ 00:25:33.970 13:10:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:33.970 * Looking for test storage... 
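
For reference, the spdkcli_nvmf_tcp run that just finished drives scripts/spdkcli.py through spdkcli_job.py; a minimal hand-run sketch of the same create/teardown flow, using the sub-command syntax echoed in the trace above (the malloc size/block-size arguments and the serial number below are assumptions, and a running nvmf_tgt on the default RPC socket is presumed), looks like:

SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py

# create a malloc bdev and expose it through an NVMe-oF/TCP subsystem (size/block size assumed)
$SPDKCLI /bdevs/malloc create 32 512 Malloc3
$SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 SPDK000001 max_namespaces=4 allow_any_host=True
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

# inspect, then tear everything down the way the clear_nvmf_config step does
$SPDKCLI ll /nvmf
$SPDKCLI /nvmf/subsystem delete_all
$SPDKCLI /bdevs/malloc delete Malloc3
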
00:25:33.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:33.970 13:10:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:33.970 13:10:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:33.970 13:10:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:33.970 13:10:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:33.970 13:10:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:33.970 13:10:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:33.970 13:10:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:33.970 13:10:14 -- scripts/common.sh@335 -- # IFS=.-: 00:25:33.970 13:10:14 -- scripts/common.sh@335 -- # read -ra ver1 00:25:33.970 13:10:14 -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.970 13:10:14 -- scripts/common.sh@336 -- # read -ra ver2 00:25:33.970 13:10:14 -- scripts/common.sh@337 -- # local 'op=<' 00:25:33.970 13:10:14 -- scripts/common.sh@339 -- # ver1_l=2 00:25:33.970 13:10:14 -- scripts/common.sh@340 -- # ver2_l=1 00:25:33.970 13:10:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:33.970 13:10:14 -- scripts/common.sh@343 -- # case "$op" in 00:25:33.970 13:10:14 -- scripts/common.sh@344 -- # : 1 00:25:33.970 13:10:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:33.970 13:10:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:33.970 13:10:14 -- scripts/common.sh@364 -- # decimal 1 00:25:33.970 13:10:14 -- scripts/common.sh@352 -- # local d=1 00:25:33.970 13:10:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.970 13:10:14 -- scripts/common.sh@354 -- # echo 1 00:25:33.970 13:10:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:33.970 13:10:14 -- scripts/common.sh@365 -- # decimal 2 00:25:33.970 13:10:14 -- scripts/common.sh@352 -- # local d=2 00:25:33.970 13:10:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.970 13:10:14 -- scripts/common.sh@354 -- # echo 2 00:25:33.970 13:10:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:33.970 13:10:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:33.970 13:10:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:33.970 13:10:14 -- scripts/common.sh@367 -- # return 0 00:25:33.970 13:10:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.970 13:10:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:33.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.970 --rc genhtml_branch_coverage=1 00:25:33.970 --rc genhtml_function_coverage=1 00:25:33.970 --rc genhtml_legend=1 00:25:33.970 --rc geninfo_all_blocks=1 00:25:33.970 --rc geninfo_unexecuted_blocks=1 00:25:33.970 00:25:33.970 ' 00:25:33.970 13:10:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:33.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.970 --rc genhtml_branch_coverage=1 00:25:33.970 --rc genhtml_function_coverage=1 00:25:33.970 --rc genhtml_legend=1 00:25:33.970 --rc geninfo_all_blocks=1 00:25:33.970 --rc geninfo_unexecuted_blocks=1 00:25:33.970 00:25:33.970 ' 00:25:33.970 13:10:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:33.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.970 --rc genhtml_branch_coverage=1 00:25:33.970 --rc genhtml_function_coverage=1 00:25:33.970 --rc genhtml_legend=1 00:25:33.970 --rc geninfo_all_blocks=1 00:25:33.970 --rc geninfo_unexecuted_blocks=1 00:25:33.970 00:25:33.970 ' 00:25:33.970 
13:10:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:33.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.970 --rc genhtml_branch_coverage=1 00:25:33.970 --rc genhtml_function_coverage=1 00:25:33.970 --rc genhtml_legend=1 00:25:33.970 --rc geninfo_all_blocks=1 00:25:33.970 --rc geninfo_unexecuted_blocks=1 00:25:33.970 00:25:33.970 ' 00:25:33.970 13:10:14 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:33.970 13:10:14 -- nvmf/common.sh@7 -- # uname -s 00:25:33.970 13:10:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.970 13:10:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.970 13:10:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.970 13:10:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.970 13:10:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.970 13:10:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.970 13:10:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.970 13:10:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.970 13:10:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.970 13:10:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.970 13:10:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:25:33.970 13:10:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:25:33.970 13:10:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.970 13:10:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.970 13:10:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:33.970 13:10:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.970 13:10:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.970 13:10:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.970 13:10:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.971 13:10:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- paths/export.sh@5 -- # export PATH 00:25:33.971 13:10:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- nvmf/common.sh@46 -- # : 0 00:25:33.971 13:10:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:33.971 13:10:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:33.971 13:10:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:33.971 13:10:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.971 13:10:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.971 13:10:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:33.971 13:10:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:33.971 13:10:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:33.971 13:10:14 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.971 13:10:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.971 13:10:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.971 13:10:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.971 13:10:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- paths/export.sh@5 -- # export PATH 00:25:33.971 13:10:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.971 13:10:14 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:33.971 13:10:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:33.971 13:10:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.971 13:10:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:33.971 13:10:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:33.971 13:10:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:33.971 13:10:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.971 13:10:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:33.971 13:10:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.971 13:10:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:33.971 13:10:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:33.971 13:10:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:33.971 13:10:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:33.971 13:10:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:33.971 13:10:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:33.971 13:10:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.971 13:10:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.971 13:10:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:33.971 13:10:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:33.971 13:10:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:33.971 13:10:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:33.971 13:10:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:33.971 13:10:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.971 13:10:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:33.971 13:10:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:33.971 13:10:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:33.971 13:10:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:33.971 13:10:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:33.971 13:10:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:33.971 Cannot find device "nvmf_tgt_br" 00:25:33.971 13:10:14 -- nvmf/common.sh@154 -- # true 00:25:33.971 13:10:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:33.971 Cannot find device "nvmf_tgt_br2" 00:25:33.971 13:10:14 -- nvmf/common.sh@155 -- # true 00:25:33.971 13:10:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:33.971 13:10:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:33.971 Cannot find device "nvmf_tgt_br" 00:25:33.971 13:10:14 -- nvmf/common.sh@157 -- # true 00:25:33.971 13:10:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:33.971 Cannot find device "nvmf_tgt_br2" 00:25:33.971 13:10:14 -- nvmf/common.sh@158 -- # true 00:25:33.971 13:10:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:33.971 13:10:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:33.971 13:10:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:33.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.971 13:10:14 -- nvmf/common.sh@161 -- # true 00:25:33.971 13:10:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:33.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:33.971 13:10:14 -- nvmf/common.sh@162 -- # true 00:25:33.971 13:10:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:33.971 13:10:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:33.971 13:10:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:33.971 13:10:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:33.971 13:10:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:33.971 13:10:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:33.971 13:10:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:33.971 13:10:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:33.971 13:10:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:33.971 13:10:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:33.971 13:10:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:33.971 13:10:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:33.971 13:10:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:33.971 13:10:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:33.971 13:10:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:33.971 13:10:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:33.971 13:10:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:33.971 13:10:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:33.971 13:10:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:33.971 13:10:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:33.971 13:10:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:33.971 13:10:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:33.971 13:10:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:33.971 13:10:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:33.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:25:33.971 00:25:33.971 --- 10.0.0.2 ping statistics --- 00:25:33.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.971 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:33.971 13:10:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:33.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:33.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:25:33.971 00:25:33.971 --- 10.0.0.3 ping statistics --- 00:25:33.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.971 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:33.971 13:10:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:33.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:25:33.971 00:25:33.971 --- 10.0.0.1 ping statistics --- 00:25:33.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.971 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:25:33.971 13:10:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.971 13:10:14 -- nvmf/common.sh@421 -- # return 0 00:25:33.971 13:10:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:33.971 13:10:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.971 13:10:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:33.971 13:10:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:33.971 13:10:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.971 13:10:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:33.971 13:10:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:33.971 13:10:14 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:33.971 13:10:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.971 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:25:33.971 13:10:14 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:33.972 13:10:14 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:33.972 13:10:14 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:33.972 13:10:14 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:33.972 13:10:14 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:33.972 13:10:14 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:33.972 13:10:14 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:33.972 13:10:14 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:33.972 13:10:14 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:33.972 13:10:14 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:33.972 13:10:14 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:33.972 13:10:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:33.972 13:10:14 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:33.972 13:10:14 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:33.972 13:10:14 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:33.972 13:10:14 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:33.972 13:10:14 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:33.972 13:10:14 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:33.972 13:10:14 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:33.972 13:10:14 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:33.972 13:10:14 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:33.972 13:10:14 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:34.231 13:10:14 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:34.231 13:10:14 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:34.231 13:10:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:34.231 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.231 13:10:14 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:34.231 13:10:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.231 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.231 13:10:14 -- target/identify_passthru.sh@31 -- # nvmfpid=101369 00:25:34.231 13:10:14 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:34.231 13:10:14 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.231 13:10:14 -- target/identify_passthru.sh@35 -- # waitforlisten 101369 00:25:34.231 13:10:14 -- common/autotest_common.sh@829 -- # '[' -z 101369 ']' 00:25:34.231 13:10:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.231 13:10:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.231 13:10:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.231 13:10:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.231 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.231 [2024-12-13 13:10:14.910014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:34.231 [2024-12-13 13:10:14.910096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.491 [2024-12-13 13:10:15.047824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.491 [2024-12-13 13:10:15.118088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:34.491 [2024-12-13 13:10:15.118290] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.491 [2024-12-13 13:10:15.118306] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.491 [2024-12-13 13:10:15.118322] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
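
The identify_passthru target is launched with --wait-for-rpc so that nvmf_set_config can be applied before the framework finishes initializing; a rough stand-in for that startup sequence (the autotest helpers use waitforlisten, the polling loop below is a simplified substitute) is:

# start the target inside the test namespace, paused until RPC configuration is done
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# wait until the RPC socket (/var/tmp/spdk.sock) starts answering
until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
$RPC nvmf_set_config --passthru-identify-ctrlr   # must be set before init completes
$RPC framework_start_init
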
00:25:34.491 [2024-12-13 13:10:15.118474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.491 [2024-12-13 13:10:15.118821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.491 [2024-12-13 13:10:15.118983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.491 [2024-12-13 13:10:15.118991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.491 13:10:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.491 13:10:15 -- common/autotest_common.sh@862 -- # return 0 00:25:34.491 13:10:15 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:34.491 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.491 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.491 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.491 13:10:15 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:34.491 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.491 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 [2024-12-13 13:10:15.290385] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:34.749 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.749 13:10:15 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.749 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.749 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 [2024-12-13 13:10:15.304727] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.749 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.749 13:10:15 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:34.749 13:10:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:34.749 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 13:10:15 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:34.749 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.749 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 Nvme0n1 00:25:34.749 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.749 13:10:15 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:34.749 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.749 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.749 13:10:15 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:34.749 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.749 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.749 13:10:15 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.749 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.749 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 [2024-12-13 13:10:15.442388] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.749 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:34.749 13:10:15 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:34.749 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.749 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:34.749 [2024-12-13 13:10:15.450150] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:34.749 [ 00:25:34.749 { 00:25:34.749 "allow_any_host": true, 00:25:34.749 "hosts": [], 00:25:34.749 "listen_addresses": [], 00:25:34.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:34.749 "subtype": "Discovery" 00:25:34.749 }, 00:25:34.749 { 00:25:34.749 "allow_any_host": true, 00:25:34.749 "hosts": [], 00:25:34.749 "listen_addresses": [ 00:25:34.749 { 00:25:34.749 "adrfam": "IPv4", 00:25:34.749 "traddr": "10.0.0.2", 00:25:34.749 "transport": "TCP", 00:25:34.749 "trsvcid": "4420", 00:25:34.749 "trtype": "TCP" 00:25:34.749 } 00:25:34.749 ], 00:25:34.749 "max_cntlid": 65519, 00:25:34.749 "max_namespaces": 1, 00:25:34.749 "min_cntlid": 1, 00:25:34.749 "model_number": "SPDK bdev Controller", 00:25:34.749 "namespaces": [ 00:25:34.749 { 00:25:34.749 "bdev_name": "Nvme0n1", 00:25:34.749 "name": "Nvme0n1", 00:25:34.749 "nguid": "FBED85D616DF4B6A85DBC2AF026C46C0", 00:25:34.749 "nsid": 1, 00:25:34.749 "uuid": "fbed85d6-16df-4b6a-85db-c2af026c46c0" 00:25:34.749 } 00:25:34.749 ], 00:25:34.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.749 "serial_number": "SPDK00000000000001", 00:25:34.749 "subtype": "NVMe" 00:25:34.749 } 00:25:34.749 ] 00:25:34.749 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.749 13:10:15 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:34.749 13:10:15 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:34.749 13:10:15 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:35.007 13:10:15 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:35.007 13:10:15 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:35.007 13:10:15 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:35.007 13:10:15 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:35.266 13:10:15 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:35.266 13:10:15 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:35.266 13:10:15 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:35.266 13:10:15 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.266 13:10:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.266 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.266 13:10:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.266 13:10:15 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:35.266 13:10:15 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:35.266 13:10:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:35.266 13:10:15 -- nvmf/common.sh@116 -- # sync 00:25:35.266 13:10:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:35.266 13:10:15 -- nvmf/common.sh@119 -- # set +e 00:25:35.266 13:10:15 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:35.266 13:10:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:35.266 rmmod nvme_tcp 00:25:35.266 rmmod nvme_fabrics 00:25:35.266 rmmod nvme_keyring 00:25:35.266 13:10:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:35.266 13:10:15 -- nvmf/common.sh@123 -- # set -e 00:25:35.266 13:10:15 -- nvmf/common.sh@124 -- # return 0 00:25:35.266 13:10:15 -- nvmf/common.sh@477 -- # '[' -n 101369 ']' 00:25:35.266 13:10:15 -- nvmf/common.sh@478 -- # killprocess 101369 00:25:35.266 13:10:15 -- common/autotest_common.sh@936 -- # '[' -z 101369 ']' 00:25:35.266 13:10:15 -- common/autotest_common.sh@940 -- # kill -0 101369 00:25:35.266 13:10:15 -- common/autotest_common.sh@941 -- # uname 00:25:35.266 13:10:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.266 13:10:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101369 00:25:35.266 13:10:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:35.266 13:10:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:35.266 killing process with pid 101369 00:25:35.266 13:10:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101369' 00:25:35.266 13:10:16 -- common/autotest_common.sh@955 -- # kill 101369 00:25:35.266 [2024-12-13 13:10:16.012603] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:35.266 13:10:16 -- common/autotest_common.sh@960 -- # wait 101369 00:25:35.525 13:10:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:35.525 13:10:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:35.525 13:10:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:35.525 13:10:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.525 13:10:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:35.525 13:10:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.525 13:10:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:35.525 13:10:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.525 13:10:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:35.525 00:25:35.525 real 0m2.353s 00:25:35.525 user 0m4.658s 00:25:35.525 sys 0m0.813s 00:25:35.525 13:10:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:35.525 13:10:16 -- common/autotest_common.sh@10 -- # set +x 00:25:35.525 ************************************ 00:25:35.525 END TEST nvmf_identify_passthru 00:25:35.525 ************************************ 00:25:35.784 13:10:16 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:35.784 13:10:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:35.784 13:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:35.784 13:10:16 -- common/autotest_common.sh@10 -- # set +x 00:25:35.784 ************************************ 00:25:35.784 START TEST nvmf_dif 00:25:35.784 ************************************ 00:25:35.784 13:10:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:35.784 * Looking for test storage... 
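
The pass/fail criterion for nvmf_identify_passthru is simply that identify data read back over the fabric matches the local PCIe controller; condensed, the check performed above (same spdk_nvme_identify invocations and grep/awk filters, BDF 0000:00:06.0 as reported by gen_nvme.sh earlier in the log) is:

IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

# serial number straight from the local PCIe controller
local_sn=$($IDENTIFY -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')

# serial number as reported by the passthru subsystem over NVMe/TCP
remote_sn=$($IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')

[ "$local_sn" = "$remote_sn" ] || echo "identify passthru mismatch: $local_sn vs $remote_sn"
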
00:25:35.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:35.784 13:10:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:35.784 13:10:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:35.784 13:10:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:35.784 13:10:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:35.784 13:10:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:35.784 13:10:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:35.784 13:10:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:35.784 13:10:16 -- scripts/common.sh@335 -- # IFS=.-: 00:25:35.784 13:10:16 -- scripts/common.sh@335 -- # read -ra ver1 00:25:35.784 13:10:16 -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.784 13:10:16 -- scripts/common.sh@336 -- # read -ra ver2 00:25:35.784 13:10:16 -- scripts/common.sh@337 -- # local 'op=<' 00:25:35.784 13:10:16 -- scripts/common.sh@339 -- # ver1_l=2 00:25:35.784 13:10:16 -- scripts/common.sh@340 -- # ver2_l=1 00:25:35.784 13:10:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:35.784 13:10:16 -- scripts/common.sh@343 -- # case "$op" in 00:25:35.784 13:10:16 -- scripts/common.sh@344 -- # : 1 00:25:35.784 13:10:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:35.784 13:10:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:35.784 13:10:16 -- scripts/common.sh@364 -- # decimal 1 00:25:35.784 13:10:16 -- scripts/common.sh@352 -- # local d=1 00:25:35.784 13:10:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.784 13:10:16 -- scripts/common.sh@354 -- # echo 1 00:25:35.784 13:10:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:35.785 13:10:16 -- scripts/common.sh@365 -- # decimal 2 00:25:35.785 13:10:16 -- scripts/common.sh@352 -- # local d=2 00:25:35.785 13:10:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.785 13:10:16 -- scripts/common.sh@354 -- # echo 2 00:25:35.785 13:10:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:35.785 13:10:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:35.785 13:10:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:35.785 13:10:16 -- scripts/common.sh@367 -- # return 0 00:25:35.785 13:10:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.785 13:10:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:35.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.785 --rc genhtml_branch_coverage=1 00:25:35.785 --rc genhtml_function_coverage=1 00:25:35.785 --rc genhtml_legend=1 00:25:35.785 --rc geninfo_all_blocks=1 00:25:35.785 --rc geninfo_unexecuted_blocks=1 00:25:35.785 00:25:35.785 ' 00:25:35.785 13:10:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:35.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.785 --rc genhtml_branch_coverage=1 00:25:35.785 --rc genhtml_function_coverage=1 00:25:35.785 --rc genhtml_legend=1 00:25:35.785 --rc geninfo_all_blocks=1 00:25:35.785 --rc geninfo_unexecuted_blocks=1 00:25:35.785 00:25:35.785 ' 00:25:35.785 13:10:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:35.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.785 --rc genhtml_branch_coverage=1 00:25:35.785 --rc genhtml_function_coverage=1 00:25:35.785 --rc genhtml_legend=1 00:25:35.785 --rc geninfo_all_blocks=1 00:25:35.785 --rc geninfo_unexecuted_blocks=1 00:25:35.785 00:25:35.785 ' 00:25:35.785 
13:10:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:35.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.785 --rc genhtml_branch_coverage=1 00:25:35.785 --rc genhtml_function_coverage=1 00:25:35.785 --rc genhtml_legend=1 00:25:35.785 --rc geninfo_all_blocks=1 00:25:35.785 --rc geninfo_unexecuted_blocks=1 00:25:35.785 00:25:35.785 ' 00:25:35.785 13:10:16 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.785 13:10:16 -- nvmf/common.sh@7 -- # uname -s 00:25:35.785 13:10:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.785 13:10:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.785 13:10:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.785 13:10:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.785 13:10:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.785 13:10:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.785 13:10:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.785 13:10:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.785 13:10:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.785 13:10:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.785 13:10:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:25:35.785 13:10:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:25:35.785 13:10:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.785 13:10:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.785 13:10:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.785 13:10:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.785 13:10:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.785 13:10:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.785 13:10:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.785 13:10:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.785 13:10:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.785 13:10:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.785 13:10:16 -- paths/export.sh@5 -- # export PATH 00:25:35.785 13:10:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.785 13:10:16 -- nvmf/common.sh@46 -- # : 0 00:25:35.785 13:10:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:35.785 13:10:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:35.785 13:10:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:35.785 13:10:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.785 13:10:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.785 13:10:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:35.785 13:10:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:35.785 13:10:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:35.785 13:10:16 -- target/dif.sh@15 -- # NULL_META=16 00:25:35.785 13:10:16 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:35.785 13:10:16 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:35.785 13:10:16 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:35.785 13:10:16 -- target/dif.sh@135 -- # nvmftestinit 00:25:35.785 13:10:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:35.785 13:10:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.785 13:10:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:35.785 13:10:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:35.785 13:10:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:35.785 13:10:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.785 13:10:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:35.785 13:10:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.785 13:10:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:35.785 13:10:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:35.785 13:10:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:35.785 13:10:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:35.785 13:10:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:35.785 13:10:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:35.785 13:10:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.785 13:10:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.785 13:10:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:35.785 13:10:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:35.785 13:10:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:35.785 13:10:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:35.785 13:10:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:35.785 13:10:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.785 13:10:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:35.785 13:10:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:35.785 13:10:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:35.785 13:10:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:35.785 13:10:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:35.785 13:10:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:35.785 Cannot find device "nvmf_tgt_br" 
00:25:35.785 13:10:16 -- nvmf/common.sh@154 -- # true 00:25:35.785 13:10:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:35.785 Cannot find device "nvmf_tgt_br2" 00:25:35.785 13:10:16 -- nvmf/common.sh@155 -- # true 00:25:35.785 13:10:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:35.785 13:10:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:35.785 Cannot find device "nvmf_tgt_br" 00:25:35.785 13:10:16 -- nvmf/common.sh@157 -- # true 00:25:35.785 13:10:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:35.785 Cannot find device "nvmf_tgt_br2" 00:25:36.044 13:10:16 -- nvmf/common.sh@158 -- # true 00:25:36.044 13:10:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:36.044 13:10:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:36.044 13:10:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.044 13:10:16 -- nvmf/common.sh@161 -- # true 00:25:36.044 13:10:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.044 13:10:16 -- nvmf/common.sh@162 -- # true 00:25:36.044 13:10:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:36.044 13:10:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:36.044 13:10:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:36.044 13:10:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:36.044 13:10:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:36.044 13:10:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:36.044 13:10:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:36.044 13:10:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:36.044 13:10:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:36.044 13:10:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:36.044 13:10:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:36.044 13:10:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:36.044 13:10:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:36.044 13:10:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:36.044 13:10:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:36.044 13:10:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.044 13:10:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:36.044 13:10:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:36.044 13:10:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.044 13:10:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.044 13:10:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.044 13:10:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.044 13:10:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.044 13:10:16 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:36.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:25:36.044 00:25:36.044 --- 10.0.0.2 ping statistics --- 00:25:36.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.045 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:36.045 13:10:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:36.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:25:36.303 00:25:36.303 --- 10.0.0.3 ping statistics --- 00:25:36.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.303 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:25:36.303 13:10:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:36.303 00:25:36.303 --- 10.0.0.1 ping statistics --- 00:25:36.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.303 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:36.303 13:10:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.303 13:10:16 -- nvmf/common.sh@421 -- # return 0 00:25:36.303 13:10:16 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:36.303 13:10:16 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:36.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:36.563 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:36.563 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:36.563 13:10:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.563 13:10:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:36.563 13:10:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:36.563 13:10:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.563 13:10:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:36.563 13:10:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:36.563 13:10:17 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:36.563 13:10:17 -- target/dif.sh@137 -- # nvmfappstart 00:25:36.563 13:10:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:36.563 13:10:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.563 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:25:36.563 13:10:17 -- nvmf/common.sh@469 -- # nvmfpid=101714 00:25:36.563 13:10:17 -- nvmf/common.sh@470 -- # waitforlisten 101714 00:25:36.563 13:10:17 -- common/autotest_common.sh@829 -- # '[' -z 101714 ']' 00:25:36.563 13:10:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:36.563 13:10:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.563 13:10:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.563 13:10:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
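
nvmf_veth_init, repeated here for the dif test, always builds the same topology: one initiator veth on the host, two target veths inside the nvmf_tgt_ns_spdk namespace, and a bridge tying the peer ends together. Stripped of the stale-interface cleanup and with the link-up/iptables steps summarized in the trailing comment, the commands echoed above reduce to:

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends plug into the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

ip link add nvmf_br type bridge
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
# followed by "ip link set ... up" on every link and an iptables ACCEPT rule for TCP port 4420
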
00:25:36.563 13:10:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.563 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:25:36.563 [2024-12-13 13:10:17.312601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:36.563 [2024-12-13 13:10:17.313285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.822 [2024-12-13 13:10:17.447962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.822 [2024-12-13 13:10:17.517494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.822 [2024-12-13 13:10:17.517635] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.822 [2024-12-13 13:10:17.517647] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.822 [2024-12-13 13:10:17.517655] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.822 [2024-12-13 13:10:17.517680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.758 13:10:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.758 13:10:18 -- common/autotest_common.sh@862 -- # return 0 00:25:37.758 13:10:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:37.758 13:10:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.758 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 13:10:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.758 13:10:18 -- target/dif.sh@139 -- # create_transport 00:25:37.758 13:10:18 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:37.758 13:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.758 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 [2024-12-13 13:10:18.381797] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.758 13:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.758 13:10:18 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:37.758 13:10:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:37.758 13:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:37.758 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 ************************************ 00:25:37.758 START TEST fio_dif_1_default 00:25:37.758 ************************************ 00:25:37.758 13:10:18 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:37.758 13:10:18 -- target/dif.sh@86 -- # create_subsystems 0 00:25:37.758 13:10:18 -- target/dif.sh@28 -- # local sub 00:25:37.758 13:10:18 -- target/dif.sh@30 -- # for sub in "$@" 00:25:37.758 13:10:18 -- target/dif.sh@31 -- # create_subsystem 0 00:25:37.758 13:10:18 -- target/dif.sh@18 -- # local sub_id=0 00:25:37.758 13:10:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:37.758 13:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.758 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 bdev_null0 00:25:37.758 13:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.758 13:10:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:37.758 13:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.758 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 13:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.758 13:10:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:37.758 13:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.758 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 13:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.758 13:10:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.758 13:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.758 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 [2024-12-13 13:10:18.433928] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.758 13:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.758 13:10:18 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:37.758 13:10:18 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:37.758 13:10:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:37.758 13:10:18 -- nvmf/common.sh@520 -- # config=() 00:25:37.758 13:10:18 -- nvmf/common.sh@520 -- # local subsystem config 00:25:37.758 13:10:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.758 13:10:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.758 { 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme$subsystem", 00:25:37.758 "trtype": "$TEST_TRANSPORT", 00:25:37.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "$NVMF_PORT", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.758 "hdgst": ${hdgst:-false}, 00:25:37.758 "ddgst": ${ddgst:-false} 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 } 00:25:37.758 EOF 00:25:37.758 )") 00:25:37.758 13:10:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.758 13:10:18 -- target/dif.sh@82 -- # gen_fio_conf 00:25:37.758 13:10:18 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.758 13:10:18 -- target/dif.sh@54 -- # local file 00:25:37.758 13:10:18 -- target/dif.sh@56 -- # cat 00:25:37.758 13:10:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:37.758 13:10:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:37.758 13:10:18 -- nvmf/common.sh@542 -- # cat 00:25:37.758 13:10:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:37.758 13:10:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:37.758 13:10:18 -- common/autotest_common.sh@1330 -- # shift 00:25:37.758 13:10:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:37.758 13:10:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.758 13:10:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:37.758 13:10:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:37.758 13:10:18 -- target/dif.sh@72 -- # (( file <= files )) 00:25:37.758 
13:10:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:37.758 13:10:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:37.758 13:10:18 -- nvmf/common.sh@544 -- # jq . 00:25:37.758 13:10:18 -- nvmf/common.sh@545 -- # IFS=, 00:25:37.758 13:10:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme0", 00:25:37.758 "trtype": "tcp", 00:25:37.758 "traddr": "10.0.0.2", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 }' 00:25:37.758 13:10:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:37.758 13:10:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:37.758 13:10:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.758 13:10:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:37.758 13:10:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:37.758 13:10:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:37.758 13:10:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:37.758 13:10:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:37.758 13:10:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:37.758 13:10:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.017 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:38.017 fio-3.35 00:25:38.017 Starting 1 thread 00:25:38.276 [2024-12-13 13:10:19.043417] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
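At this point the fio_dif_1_default case has created the TCP transport with DIF insert/strip enabled, exported one DIF-type-1 null bdev through subsystem nqn.2016-06.io.spdk:cnode0, and handed fio a JSON config (printf'd above) that attaches to it over 10.0.0.2:4420. Collected in one place, the RPC sequence looks roughly like the sketch below; every command and argument is copied from the rpc_cmd lines in the trace, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper.

# Transport with DIF insert/strip, then a 64 MB null bdev with 512-byte blocks,
# 16-byte metadata and DIF type 1, exposed over NVMe/TCP on 10.0.0.2:4420.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420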
00:25:38.276 [2024-12-13 13:10:19.043545] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:50.489 00:25:50.489 filename0: (groupid=0, jobs=1): err= 0: pid=101798: Fri Dec 13 13:10:29 2024 00:25:50.489 read: IOPS=4108, BW=16.0MiB/s (16.8MB/s)(161MiB/10001msec) 00:25:50.489 slat (nsec): min=5753, max=71825, avg=6981.78, stdev=2646.27 00:25:50.489 clat (usec): min=344, max=42218, avg=952.82, stdev=4647.84 00:25:50.489 lat (usec): min=350, max=42226, avg=959.80, stdev=4647.95 00:25:50.489 clat percentiles (usec): 00:25:50.489 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 383], 00:25:50.489 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 408], 00:25:50.489 | 70.00th=[ 416], 80.00th=[ 433], 90.00th=[ 478], 95.00th=[ 545], 00:25:50.489 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:50.489 | 99.99th=[41681] 00:25:50.489 bw ( KiB/s): min= 7520, max=25216, per=100.00%, avg=16855.58, stdev=4915.24, samples=19 00:25:50.489 iops : min= 1880, max= 6304, avg=4213.89, stdev=1228.81, samples=19 00:25:50.489 lat (usec) : 500=91.89%, 750=6.71%, 1000=0.04% 00:25:50.489 lat (msec) : 2=0.02%, 10=0.01%, 50=1.32% 00:25:50.489 cpu : usr=88.60%, sys=9.65%, ctx=24, majf=0, minf=8 00:25:50.489 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.489 issued rwts: total=41088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.489 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:50.489 00:25:50.489 Run status group 0 (all jobs): 00:25:50.489 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=161MiB (168MB), run=10001-10001msec 00:25:50.489 13:10:29 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:50.489 13:10:29 -- target/dif.sh@43 -- # local sub 00:25:50.489 13:10:29 -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.489 13:10:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:50.489 13:10:29 -- target/dif.sh@36 -- # local sub_id=0 00:25:50.489 13:10:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 00:25:50.489 real 0m10.976s 00:25:50.489 user 0m9.509s 00:25:50.489 sys 0m1.196s 00:25:50.489 13:10:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 ************************************ 00:25:50.489 END TEST fio_dif_1_default 00:25:50.489 ************************************ 00:25:50.489 13:10:29 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:50.489 13:10:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:50.489 13:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 ************************************ 
00:25:50.489 START TEST fio_dif_1_multi_subsystems 00:25:50.489 ************************************ 00:25:50.489 13:10:29 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:50.489 13:10:29 -- target/dif.sh@92 -- # local files=1 00:25:50.489 13:10:29 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:50.489 13:10:29 -- target/dif.sh@28 -- # local sub 00:25:50.489 13:10:29 -- target/dif.sh@30 -- # for sub in "$@" 00:25:50.489 13:10:29 -- target/dif.sh@31 -- # create_subsystem 0 00:25:50.489 13:10:29 -- target/dif.sh@18 -- # local sub_id=0 00:25:50.489 13:10:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 bdev_null0 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 [2024-12-13 13:10:29.458831] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@30 -- # for sub in "$@" 00:25:50.489 13:10:29 -- target/dif.sh@31 -- # create_subsystem 1 00:25:50.489 13:10:29 -- target/dif.sh@18 -- # local sub_id=1 00:25:50.489 13:10:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 bdev_null1 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.489 13:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:50.489 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 13:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.489 13:10:29 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:50.489 13:10:29 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:50.489 13:10:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:50.489 13:10:29 -- nvmf/common.sh@520 -- # config=() 00:25:50.489 13:10:29 -- nvmf/common.sh@520 -- # local subsystem config 00:25:50.489 13:10:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.489 13:10:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.489 { 00:25:50.489 "params": { 00:25:50.489 "name": "Nvme$subsystem", 00:25:50.489 "trtype": "$TEST_TRANSPORT", 00:25:50.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.489 "adrfam": "ipv4", 00:25:50.489 "trsvcid": "$NVMF_PORT", 00:25:50.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.489 "hdgst": ${hdgst:-false}, 00:25:50.489 "ddgst": ${ddgst:-false} 00:25:50.489 }, 00:25:50.489 "method": "bdev_nvme_attach_controller" 00:25:50.489 } 00:25:50.489 EOF 00:25:50.489 )") 00:25:50.489 13:10:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.489 13:10:29 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.489 13:10:29 -- target/dif.sh@82 -- # gen_fio_conf 00:25:50.489 13:10:29 -- target/dif.sh@54 -- # local file 00:25:50.489 13:10:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:50.489 13:10:29 -- target/dif.sh@56 -- # cat 00:25:50.489 13:10:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:50.489 13:10:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:50.489 13:10:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:50.489 13:10:29 -- common/autotest_common.sh@1330 -- # shift 00:25:50.489 13:10:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:50.489 13:10:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.489 13:10:29 -- nvmf/common.sh@542 -- # cat 00:25:50.489 13:10:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:50.489 13:10:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:50.489 13:10:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:50.489 13:10:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:50.489 13:10:29 -- target/dif.sh@72 -- # (( file <= files )) 00:25:50.489 13:10:29 -- target/dif.sh@73 -- # cat 00:25:50.489 13:10:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.489 13:10:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.489 { 00:25:50.489 "params": { 00:25:50.489 "name": "Nvme$subsystem", 00:25:50.489 "trtype": "$TEST_TRANSPORT", 00:25:50.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.489 "adrfam": "ipv4", 00:25:50.489 "trsvcid": "$NVMF_PORT", 00:25:50.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.489 "hdgst": ${hdgst:-false}, 00:25:50.489 "ddgst": ${ddgst:-false} 00:25:50.489 }, 00:25:50.489 "method": "bdev_nvme_attach_controller" 00:25:50.489 } 00:25:50.489 EOF 00:25:50.489 )") 00:25:50.489 13:10:29 -- nvmf/common.sh@542 -- # cat 00:25:50.489 
13:10:29 -- target/dif.sh@72 -- # (( file++ )) 00:25:50.489 13:10:29 -- target/dif.sh@72 -- # (( file <= files )) 00:25:50.489 13:10:29 -- nvmf/common.sh@544 -- # jq . 00:25:50.489 13:10:29 -- nvmf/common.sh@545 -- # IFS=, 00:25:50.489 13:10:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:50.489 "params": { 00:25:50.489 "name": "Nvme0", 00:25:50.489 "trtype": "tcp", 00:25:50.489 "traddr": "10.0.0.2", 00:25:50.489 "adrfam": "ipv4", 00:25:50.489 "trsvcid": "4420", 00:25:50.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:50.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:50.490 "hdgst": false, 00:25:50.490 "ddgst": false 00:25:50.490 }, 00:25:50.490 "method": "bdev_nvme_attach_controller" 00:25:50.490 },{ 00:25:50.490 "params": { 00:25:50.490 "name": "Nvme1", 00:25:50.490 "trtype": "tcp", 00:25:50.490 "traddr": "10.0.0.2", 00:25:50.490 "adrfam": "ipv4", 00:25:50.490 "trsvcid": "4420", 00:25:50.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:50.490 "hdgst": false, 00:25:50.490 "ddgst": false 00:25:50.490 }, 00:25:50.490 "method": "bdev_nvme_attach_controller" 00:25:50.490 }' 00:25:50.490 13:10:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:50.490 13:10:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:50.490 13:10:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.490 13:10:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:50.490 13:10:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:50.490 13:10:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:50.490 13:10:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:50.490 13:10:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:50.490 13:10:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:50.490 13:10:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.490 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:50.490 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:50.490 fio-3.35 00:25:50.490 Starting 2 threads 00:25:50.490 [2024-12-13 13:10:30.246388] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
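The multi-subsystem job hands fio two descriptors: /dev/fd/62 carries the JSON printed above (bdev_nvme_attach_controller entries Nvme0 and Nvme1, both pointing at 10.0.0.2:4420), and /dev/fd/61 carries the generated job file. A rough stand-alone equivalent using ordinary files is sketched below, assuming the usual SPDK bdev fio plugin conventions; the randread/4k/iodepth=4 parameters are read off the filename0/filename1 header lines, while the job-file keys and the Nvme0n1/Nvme1n1 bdev names are assumptions rather than something printed in this trace.

# Write a job file equivalent to what the harness feeds through /dev/fd/61;
# bdev.json would hold the JSON config printf'd above.
cat > dif_multi.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=./bdev.json
thread=1
rw=randread
bs=4k
iodepth=4
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio dif_multi.fio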
00:25:50.490 [2024-12-13 13:10:30.246450] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:00.466 00:26:00.466 filename0: (groupid=0, jobs=1): err= 0: pid=101958: Fri Dec 13 13:10:40 2024 00:26:00.466 read: IOPS=1150, BW=4602KiB/s (4712kB/s)(45.1MiB/10041msec) 00:26:00.466 slat (nsec): min=5353, max=37439, avg=7451.39, stdev=2788.49 00:26:00.466 clat (usec): min=371, max=41682, avg=3454.28, stdev=10634.28 00:26:00.466 lat (usec): min=377, max=41695, avg=3461.73, stdev=10634.34 00:26:00.466 clat percentiles (usec): 00:26:00.466 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 400], 00:26:00.466 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:26:00.466 | 70.00th=[ 441], 80.00th=[ 469], 90.00th=[ 701], 95.00th=[40633], 00:26:00.466 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:26:00.466 | 99.99th=[41681] 00:26:00.466 bw ( KiB/s): min= 2496, max= 7168, per=52.69%, avg=4619.20, stdev=1223.52, samples=20 00:26:00.466 iops : min= 624, max= 1792, avg=1154.80, stdev=305.88, samples=20 00:26:00.466 lat (usec) : 500=83.48%, 750=8.30%, 1000=0.74% 00:26:00.466 lat (msec) : 4=0.03%, 50=7.44% 00:26:00.466 cpu : usr=94.78%, sys=4.49%, ctx=14, majf=0, minf=9 00:26:00.466 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.466 issued rwts: total=11552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.466 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:00.466 filename1: (groupid=0, jobs=1): err= 0: pid=101959: Fri Dec 13 13:10:40 2024 00:26:00.466 read: IOPS=1045, BW=4180KiB/s (4281kB/s)(40.8MiB/10001msec) 00:26:00.466 slat (nsec): min=5783, max=44396, avg=7530.18, stdev=3131.73 00:26:00.466 clat (usec): min=371, max=42742, avg=3804.49, stdev=11187.14 00:26:00.466 lat (usec): min=377, max=42751, avg=3812.02, stdev=11187.34 00:26:00.466 clat percentiles (usec): 00:26:00.466 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 396], 00:26:00.466 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 424], 00:26:00.466 | 70.00th=[ 437], 80.00th=[ 469], 90.00th=[ 717], 95.00th=[40633], 00:26:00.466 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:26:00.466 | 99.99th=[42730] 00:26:00.466 bw ( KiB/s): min= 2368, max= 7136, per=47.91%, avg=4200.42, stdev=1180.67, samples=19 00:26:00.466 iops : min= 592, max= 1784, avg=1050.11, stdev=295.17, samples=19 00:26:00.466 lat (usec) : 500=83.44%, 750=7.43%, 1000=0.75% 00:26:00.466 lat (msec) : 2=0.04%, 4=0.04%, 50=8.30% 00:26:00.466 cpu : usr=94.31%, sys=4.99%, ctx=127, majf=0, minf=0 00:26:00.466 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.466 issued rwts: total=10452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.466 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:00.466 00:26:00.466 Run status group 0 (all jobs): 00:26:00.466 READ: bw=8766KiB/s (8976kB/s), 4180KiB/s-4602KiB/s (4281kB/s-4712kB/s), io=86.0MiB (90.1MB), run=10001-10041msec 00:26:00.466 13:10:40 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:00.466 13:10:40 -- target/dif.sh@43 -- # local sub 00:26:00.466 13:10:40 -- target/dif.sh@45 
-- # for sub in "$@" 00:26:00.466 13:10:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:00.466 13:10:40 -- target/dif.sh@36 -- # local sub_id=0 00:26:00.466 13:10:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.466 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.466 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.466 13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.466 13:10:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:00.466 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.466 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.466 13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.466 13:10:40 -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.466 13:10:40 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:00.466 13:10:40 -- target/dif.sh@36 -- # local sub_id=1 00:26:00.466 13:10:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.466 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.466 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.466 13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.466 13:10:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:00.467 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.467 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.467 13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.467 00:26:00.467 real 0m11.221s 00:26:00.467 user 0m19.750s 00:26:00.467 sys 0m1.237s 00:26:00.467 13:10:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:00.467 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.467 ************************************ 00:26:00.467 END TEST fio_dif_1_multi_subsystems 00:26:00.467 ************************************ 00:26:00.467 13:10:40 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:00.467 13:10:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:00.467 13:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:00.467 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.467 ************************************ 00:26:00.467 START TEST fio_dif_rand_params 00:26:00.467 ************************************ 00:26:00.467 13:10:40 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:00.467 13:10:40 -- target/dif.sh@100 -- # local NULL_DIF 00:26:00.467 13:10:40 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:00.467 13:10:40 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:00.467 13:10:40 -- target/dif.sh@103 -- # bs=128k 00:26:00.467 13:10:40 -- target/dif.sh@103 -- # numjobs=3 00:26:00.467 13:10:40 -- target/dif.sh@103 -- # iodepth=3 00:26:00.467 13:10:40 -- target/dif.sh@103 -- # runtime=5 00:26:00.467 13:10:40 -- target/dif.sh@105 -- # create_subsystems 0 00:26:00.467 13:10:40 -- target/dif.sh@28 -- # local sub 00:26:00.467 13:10:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.467 13:10:40 -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.467 13:10:40 -- target/dif.sh@18 -- # local sub_id=0 00:26:00.467 13:10:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:00.467 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.467 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.467 bdev_null0 00:26:00.467 
13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.467 13:10:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.467 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.467 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.467 13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.467 13:10:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.467 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.467 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.467 13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.467 13:10:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.467 13:10:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.467 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.467 [2024-12-13 13:10:40.733236] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.467 13:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.467 13:10:40 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:00.467 13:10:40 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:00.467 13:10:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:00.467 13:10:40 -- nvmf/common.sh@520 -- # config=() 00:26:00.467 13:10:40 -- nvmf/common.sh@520 -- # local subsystem config 00:26:00.467 13:10:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:00.467 13:10:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.467 13:10:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:00.467 { 00:26:00.467 "params": { 00:26:00.467 "name": "Nvme$subsystem", 00:26:00.467 "trtype": "$TEST_TRANSPORT", 00:26:00.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.467 "adrfam": "ipv4", 00:26:00.467 "trsvcid": "$NVMF_PORT", 00:26:00.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.467 "hdgst": ${hdgst:-false}, 00:26:00.467 "ddgst": ${ddgst:-false} 00:26:00.467 }, 00:26:00.467 "method": "bdev_nvme_attach_controller" 00:26:00.467 } 00:26:00.467 EOF 00:26:00.467 )") 00:26:00.467 13:10:40 -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.467 13:10:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.467 13:10:40 -- target/dif.sh@54 -- # local file 00:26:00.467 13:10:40 -- target/dif.sh@56 -- # cat 00:26:00.467 13:10:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:00.467 13:10:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.467 13:10:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:00.467 13:10:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.467 13:10:40 -- common/autotest_common.sh@1330 -- # shift 00:26:00.467 13:10:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:00.467 13:10:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.467 13:10:40 -- nvmf/common.sh@542 -- # cat 00:26:00.467 13:10:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:00.467 13:10:40 -- 
common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.467 13:10:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:00.467 13:10:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.467 13:10:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.467 13:10:40 -- nvmf/common.sh@544 -- # jq . 00:26:00.467 13:10:40 -- nvmf/common.sh@545 -- # IFS=, 00:26:00.467 13:10:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:00.467 "params": { 00:26:00.467 "name": "Nvme0", 00:26:00.467 "trtype": "tcp", 00:26:00.467 "traddr": "10.0.0.2", 00:26:00.467 "adrfam": "ipv4", 00:26:00.467 "trsvcid": "4420", 00:26:00.467 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.467 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.467 "hdgst": false, 00:26:00.467 "ddgst": false 00:26:00.467 }, 00:26:00.467 "method": "bdev_nvme_attach_controller" 00:26:00.467 }' 00:26:00.467 13:10:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:00.467 13:10:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:00.467 13:10:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.467 13:10:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.467 13:10:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:00.467 13:10:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:00.467 13:10:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:00.467 13:10:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:00.467 13:10:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:00.467 13:10:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.467 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:00.467 ... 00:26:00.467 fio-3.35 00:26:00.467 Starting 3 threads 00:26:00.726 [2024-12-13 13:10:41.334690] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
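The fio_dif_rand_params case repeats the same create/run/destroy flow with randomized job parameters: this pass uses a DIF-type-3 null bdev and three 128 KiB-block readers at queue depth 3 for a 5-second run (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 in the trace). The only RPC that differs from the single-subsystem sketch earlier is the bdev creation, shown below with the arguments taken from the trace; the rpc.py path is the same assumption as before.

# Same null bdev geometry as before, but DIF type 3 instead of type 1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3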
00:26:00.726 [2024-12-13 13:10:41.334783] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:05.995 00:26:05.995 filename0: (groupid=0, jobs=1): err= 0: pid=102115: Fri Dec 13 13:10:46 2024 00:26:05.995 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(168MiB/5008msec) 00:26:05.995 slat (nsec): min=5875, max=69462, avg=12217.41, stdev=5829.75 00:26:05.995 clat (usec): min=3940, max=52594, avg=11185.38, stdev=9405.33 00:26:05.995 lat (usec): min=3950, max=52605, avg=11197.59, stdev=9405.73 00:26:05.995 clat percentiles (usec): 00:26:05.995 | 1.00th=[ 5145], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 7373], 00:26:05.995 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:26:05.995 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12256], 95.00th=[47449], 00:26:05.995 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:26:05.995 | 99.99th=[52691] 00:26:05.995 bw ( KiB/s): min=22272, max=43008, per=35.14%, avg=34252.80, stdev=7017.91, samples=10 00:26:05.995 iops : min= 174, max= 336, avg=267.60, stdev=54.83, samples=10 00:26:05.995 lat (msec) : 4=0.07%, 10=68.98%, 20=25.58%, 50=2.39%, 100=2.98% 00:26:05.995 cpu : usr=92.63%, sys=5.71%, ctx=7, majf=0, minf=0 00:26:05.995 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.995 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.995 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.995 filename0: (groupid=0, jobs=1): err= 0: pid=102116: Fri Dec 13 13:10:46 2024 00:26:05.995 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(177MiB/5005msec) 00:26:05.995 slat (nsec): min=5745, max=71767, avg=10762.41, stdev=6336.67 00:26:05.995 clat (usec): min=3545, max=51295, avg=10564.60, stdev=4619.63 00:26:05.995 lat (usec): min=3551, max=51302, avg=10575.36, stdev=4620.20 00:26:05.995 clat percentiles (usec): 00:26:05.995 | 1.00th=[ 3621], 5.00th=[ 3720], 10.00th=[ 4146], 20.00th=[ 7373], 00:26:05.995 | 30.00th=[ 8160], 40.00th=[ 9241], 50.00th=[10683], 60.00th=[11994], 00:26:05.995 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15270], 95.00th=[16712], 00:26:05.996 | 99.00th=[18220], 99.50th=[19006], 99.90th=[50070], 99.95th=[51119], 00:26:05.996 | 99.99th=[51119] 00:26:05.996 bw ( KiB/s): min=26368, max=46848, per=37.19%, avg=36255.60, stdev=7275.35, samples=10 00:26:05.996 iops : min= 206, max= 366, avg=283.20, stdev=56.88, samples=10 00:26:05.996 lat (msec) : 4=8.39%, 10=38.08%, 20=53.10%, 50=0.21%, 100=0.21% 00:26:05.996 cpu : usr=93.07%, sys=5.16%, ctx=6, majf=0, minf=11 00:26:05.996 IO depths : 1=22.6%, 2=77.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.996 issued rwts: total=1418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.996 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.996 filename0: (groupid=0, jobs=1): err= 0: pid=102117: Fri Dec 13 13:10:46 2024 00:26:05.996 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(132MiB/5006msec) 00:26:05.996 slat (nsec): min=6076, max=73114, avg=15404.44, stdev=8551.11 00:26:05.996 clat (usec): min=3671, max=55975, avg=14208.45, stdev=11790.96 00:26:05.996 lat (usec): min=3682, max=55982, avg=14223.86, stdev=11791.18 00:26:05.996 clat 
percentiles (usec): 00:26:05.996 | 1.00th=[ 4047], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 8717], 00:26:05.996 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:26:05.996 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15795], 95.00th=[50070], 00:26:05.996 | 99.00th=[53740], 99.50th=[55313], 99.90th=[55313], 99.95th=[55837], 00:26:05.996 | 99.99th=[55837] 00:26:05.996 bw ( KiB/s): min=20736, max=37888, per=27.69%, avg=26993.78, stdev=5798.43, samples=9 00:26:05.996 iops : min= 162, max= 296, avg=210.89, stdev=45.30, samples=9 00:26:05.996 lat (msec) : 4=0.76%, 10=33.46%, 20=56.68%, 50=4.64%, 100=4.45% 00:26:05.996 cpu : usr=94.47%, sys=4.04%, ctx=75, majf=0, minf=9 00:26:05.996 IO depths : 1=6.0%, 2=94.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.996 issued rwts: total=1055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.996 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.996 00:26:05.996 Run status group 0 (all jobs): 00:26:05.996 READ: bw=95.2MiB/s (99.8MB/s), 26.3MiB/s-35.4MiB/s (27.6MB/s-37.1MB/s), io=477MiB (500MB), run=5005-5008msec 00:26:05.996 13:10:46 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:05.996 13:10:46 -- target/dif.sh@43 -- # local sub 00:26:05.996 13:10:46 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.996 13:10:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.996 13:10:46 -- target/dif.sh@36 -- # local sub_id=0 00:26:05.996 13:10:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:05.996 13:10:46 -- target/dif.sh@109 -- # bs=4k 00:26:05.996 13:10:46 -- target/dif.sh@109 -- # numjobs=8 00:26:05.996 13:10:46 -- target/dif.sh@109 -- # iodepth=16 00:26:05.996 13:10:46 -- target/dif.sh@109 -- # runtime= 00:26:05.996 13:10:46 -- target/dif.sh@109 -- # files=2 00:26:05.996 13:10:46 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:05.996 13:10:46 -- target/dif.sh@28 -- # local sub 00:26:05.996 13:10:46 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.996 13:10:46 -- target/dif.sh@31 -- # create_subsystem 0 00:26:05.996 13:10:46 -- target/dif.sh@18 -- # local sub_id=0 00:26:05.996 13:10:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 bdev_null0 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 13:10:46 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 [2024-12-13 13:10:46.710740] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.996 13:10:46 -- target/dif.sh@31 -- # create_subsystem 1 00:26:05.996 13:10:46 -- target/dif.sh@18 -- # local sub_id=1 00:26:05.996 13:10:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 bdev_null1 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.996 13:10:46 -- target/dif.sh@31 -- # create_subsystem 2 00:26:05.996 13:10:46 -- target/dif.sh@18 -- # local sub_id=2 00:26:05.996 13:10:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:05.996 bdev_null2 00:26:05.996 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.996 13:10:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:05.996 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.996 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:06.255 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.255 13:10:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:06.255 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:06.255 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:06.255 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.255 13:10:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:06.255 13:10:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.255 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:06.255 13:10:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.255 13:10:46 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:06.255 13:10:46 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:06.255 13:10:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:06.255 13:10:46 -- nvmf/common.sh@520 -- # config=() 00:26:06.255 13:10:46 -- nvmf/common.sh@520 -- # local subsystem config 00:26:06.255 13:10:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:06.255 13:10:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:06.255 { 00:26:06.255 "params": { 00:26:06.255 "name": "Nvme$subsystem", 00:26:06.255 "trtype": "$TEST_TRANSPORT", 00:26:06.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.255 "adrfam": "ipv4", 00:26:06.255 "trsvcid": "$NVMF_PORT", 00:26:06.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.255 "hdgst": ${hdgst:-false}, 00:26:06.255 "ddgst": ${ddgst:-false} 00:26:06.255 }, 00:26:06.255 "method": "bdev_nvme_attach_controller" 00:26:06.255 } 00:26:06.255 EOF 00:26:06.255 )") 00:26:06.255 13:10:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.255 13:10:46 -- target/dif.sh@82 -- # gen_fio_conf 00:26:06.255 13:10:46 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.255 13:10:46 -- target/dif.sh@54 -- # local file 00:26:06.255 13:10:46 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:06.255 13:10:46 -- target/dif.sh@56 -- # cat 00:26:06.255 13:10:46 -- nvmf/common.sh@542 -- # cat 00:26:06.255 13:10:46 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.255 13:10:46 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:06.255 13:10:46 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.255 13:10:46 -- common/autotest_common.sh@1330 -- # shift 00:26:06.255 13:10:46 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:06.255 13:10:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.255 13:10:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.255 13:10:46 -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.256 13:10:46 -- target/dif.sh@73 -- # cat 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.256 13:10:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:06.256 13:10:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:06.256 { 00:26:06.256 "params": { 00:26:06.256 "name": "Nvme$subsystem", 00:26:06.256 "trtype": "$TEST_TRANSPORT", 00:26:06.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.256 "adrfam": "ipv4", 00:26:06.256 "trsvcid": "$NVMF_PORT", 00:26:06.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.256 "hdgst": ${hdgst:-false}, 00:26:06.256 "ddgst": 
${ddgst:-false} 00:26:06.256 }, 00:26:06.256 "method": "bdev_nvme_attach_controller" 00:26:06.256 } 00:26:06.256 EOF 00:26:06.256 )") 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:06.256 13:10:46 -- nvmf/common.sh@542 -- # cat 00:26:06.256 13:10:46 -- target/dif.sh@72 -- # (( file++ )) 00:26:06.256 13:10:46 -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.256 13:10:46 -- target/dif.sh@73 -- # cat 00:26:06.256 13:10:46 -- target/dif.sh@72 -- # (( file++ )) 00:26:06.256 13:10:46 -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.256 13:10:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:06.256 13:10:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:06.256 { 00:26:06.256 "params": { 00:26:06.256 "name": "Nvme$subsystem", 00:26:06.256 "trtype": "$TEST_TRANSPORT", 00:26:06.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.256 "adrfam": "ipv4", 00:26:06.256 "trsvcid": "$NVMF_PORT", 00:26:06.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.256 "hdgst": ${hdgst:-false}, 00:26:06.256 "ddgst": ${ddgst:-false} 00:26:06.256 }, 00:26:06.256 "method": "bdev_nvme_attach_controller" 00:26:06.256 } 00:26:06.256 EOF 00:26:06.256 )") 00:26:06.256 13:10:46 -- nvmf/common.sh@542 -- # cat 00:26:06.256 13:10:46 -- nvmf/common.sh@544 -- # jq . 00:26:06.256 13:10:46 -- nvmf/common.sh@545 -- # IFS=, 00:26:06.256 13:10:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:06.256 "params": { 00:26:06.256 "name": "Nvme0", 00:26:06.256 "trtype": "tcp", 00:26:06.256 "traddr": "10.0.0.2", 00:26:06.256 "adrfam": "ipv4", 00:26:06.256 "trsvcid": "4420", 00:26:06.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.256 "hdgst": false, 00:26:06.256 "ddgst": false 00:26:06.256 }, 00:26:06.256 "method": "bdev_nvme_attach_controller" 00:26:06.256 },{ 00:26:06.256 "params": { 00:26:06.256 "name": "Nvme1", 00:26:06.256 "trtype": "tcp", 00:26:06.256 "traddr": "10.0.0.2", 00:26:06.256 "adrfam": "ipv4", 00:26:06.256 "trsvcid": "4420", 00:26:06.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.256 "hdgst": false, 00:26:06.256 "ddgst": false 00:26:06.256 }, 00:26:06.256 "method": "bdev_nvme_attach_controller" 00:26:06.256 },{ 00:26:06.256 "params": { 00:26:06.256 "name": "Nvme2", 00:26:06.256 "trtype": "tcp", 00:26:06.256 "traddr": "10.0.0.2", 00:26:06.256 "adrfam": "ipv4", 00:26:06.256 "trsvcid": "4420", 00:26:06.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:06.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:06.256 "hdgst": false, 00:26:06.256 "ddgst": false 00:26:06.256 }, 00:26:06.256 "method": "bdev_nvme_attach_controller" 00:26:06.256 }' 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:06.256 13:10:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:06.256 13:10:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:06.256 13:10:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:06.256 13:10:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:06.256 13:10:46 
-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:06.256 13:10:46 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.256 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:06.256 ... 00:26:06.256 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:06.256 ... 00:26:06.256 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:06.256 ... 00:26:06.256 fio-3.35 00:26:06.256 Starting 24 threads 00:26:07.192 [2024-12-13 13:10:47.654830] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:07.192 [2024-12-13 13:10:47.654908] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:17.190 00:26:17.190 filename0: (groupid=0, jobs=1): err= 0: pid=102212: Fri Dec 13 13:10:57 2024 00:26:17.190 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.79MiB/10034msec) 00:26:17.190 slat (usec): min=6, max=8073, avg=22.76, stdev=289.37 00:26:17.190 clat (msec): min=24, max=150, avg=63.93, stdev=22.37 00:26:17.190 lat (msec): min=24, max=150, avg=63.95, stdev=22.37 00:26:17.190 clat percentiles (msec): 00:26:17.190 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 46], 00:26:17.190 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:26:17.190 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:26:17.190 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 150], 00:26:17.190 | 99.99th=[ 150] 00:26:17.190 bw ( KiB/s): min= 640, max= 1376, per=4.46%, avg=995.70, stdev=203.04, samples=20 00:26:17.190 iops : min= 160, max= 344, avg=248.90, stdev=50.77, samples=20 00:26:17.190 lat (msec) : 50=32.44%, 100=60.53%, 250=7.02% 00:26:17.190 cpu : usr=33.32%, sys=0.57%, ctx=999, majf=0, minf=9 00:26:17.190 IO depths : 1=0.7%, 2=1.6%, 4=7.4%, 8=76.8%, 16=13.5%, 32=0.0%, >=64=0.0% 00:26:17.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 complete : 0=0.0%, 4=89.6%, 8=6.5%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 issued rwts: total=2506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.190 filename0: (groupid=0, jobs=1): err= 0: pid=102213: Fri Dec 13 13:10:57 2024 00:26:17.190 read: IOPS=255, BW=1022KiB/s (1047kB/s)(10.0MiB/10062msec) 00:26:17.190 slat (usec): min=6, max=7029, avg=25.15, stdev=263.99 00:26:17.190 clat (msec): min=25, max=156, avg=62.47, stdev=23.32 00:26:17.190 lat (msec): min=25, max=156, avg=62.50, stdev=23.32 00:26:17.190 clat percentiles (msec): 00:26:17.190 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 44], 00:26:17.190 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 64], 00:26:17.190 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 111], 00:26:17.190 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:26:17.190 | 99.99th=[ 157] 00:26:17.190 bw ( KiB/s): min= 640, max= 1344, per=4.59%, avg=1022.05, stdev=200.88, samples=20 00:26:17.190 iops : min= 160, max= 336, avg=255.50, stdev=50.24, samples=20 00:26:17.190 lat (msec) : 50=36.68%, 100=56.83%, 250=6.50% 00:26:17.190 cpu : usr=41.73%, sys=0.69%, ctx=1278, majf=0, minf=9 00:26:17.190 IO depths : 1=1.4%, 2=2.9%, 4=11.0%, 8=73.1%, 
16=11.6%, 32=0.0%, >=64=0.0% 00:26:17.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 complete : 0=0.0%, 4=90.1%, 8=4.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.190 filename0: (groupid=0, jobs=1): err= 0: pid=102214: Fri Dec 13 13:10:57 2024 00:26:17.190 read: IOPS=238, BW=953KiB/s (976kB/s)(9576KiB/10044msec) 00:26:17.190 slat (usec): min=6, max=4038, avg=15.07, stdev=116.28 00:26:17.190 clat (msec): min=32, max=144, avg=66.97, stdev=21.21 00:26:17.190 lat (msec): min=32, max=144, avg=66.98, stdev=21.21 00:26:17.190 clat percentiles (msec): 00:26:17.190 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 50], 00:26:17.190 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 69], 00:26:17.190 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 110], 00:26:17.190 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:26:17.190 | 99.99th=[ 144] 00:26:17.190 bw ( KiB/s): min= 688, max= 1200, per=4.26%, avg=951.00, stdev=162.83, samples=20 00:26:17.190 iops : min= 172, max= 300, avg=237.70, stdev=40.76, samples=20 00:26:17.190 lat (msec) : 50=21.64%, 100=69.97%, 250=8.40% 00:26:17.190 cpu : usr=42.16%, sys=0.83%, ctx=1242, majf=0, minf=9 00:26:17.190 IO depths : 1=1.3%, 2=2.9%, 4=9.7%, 8=73.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:17.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 complete : 0=0.0%, 4=90.1%, 8=5.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 issued rwts: total=2394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.190 filename0: (groupid=0, jobs=1): err= 0: pid=102215: Fri Dec 13 13:10:57 2024 00:26:17.190 read: IOPS=244, BW=978KiB/s (1001kB/s)(9812KiB/10036msec) 00:26:17.190 slat (usec): min=4, max=8019, avg=17.68, stdev=190.21 00:26:17.190 clat (msec): min=26, max=137, avg=65.31, stdev=20.88 00:26:17.190 lat (msec): min=26, max=137, avg=65.33, stdev=20.88 00:26:17.190 clat percentiles (msec): 00:26:17.190 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:26:17.190 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 68], 00:26:17.190 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 108], 00:26:17.190 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 138], 99.95th=[ 138], 00:26:17.190 | 99.99th=[ 138] 00:26:17.190 bw ( KiB/s): min= 528, max= 1344, per=4.37%, avg=974.60, stdev=193.98, samples=20 00:26:17.190 iops : min= 132, max= 336, avg=243.65, stdev=48.49, samples=20 00:26:17.190 lat (msec) : 50=26.13%, 100=66.25%, 250=7.62% 00:26:17.190 cpu : usr=37.59%, sys=0.59%, ctx=1044, majf=0, minf=9 00:26:17.190 IO depths : 1=1.2%, 2=2.9%, 4=10.4%, 8=73.5%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:17.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 issued rwts: total=2453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.190 filename0: (groupid=0, jobs=1): err= 0: pid=102216: Fri Dec 13 13:10:57 2024 00:26:17.190 read: IOPS=263, BW=1053KiB/s (1079kB/s)(10.3MiB/10055msec) 00:26:17.190 slat (usec): min=3, max=8017, avg=13.79, stdev=155.69 00:26:17.190 clat (msec): min=21, max=149, avg=60.62, stdev=20.62 00:26:17.190 lat (msec): min=21, max=149, avg=60.63, 
stdev=20.62 00:26:17.190 clat percentiles (msec): 00:26:17.190 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 46], 00:26:17.190 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 62], 00:26:17.190 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 97], 00:26:17.190 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 150], 99.95th=[ 150], 00:26:17.190 | 99.99th=[ 150] 00:26:17.190 bw ( KiB/s): min= 704, max= 1328, per=4.72%, avg=1052.80, stdev=175.87, samples=20 00:26:17.190 iops : min= 176, max= 332, avg=263.20, stdev=43.97, samples=20 00:26:17.190 lat (msec) : 50=41.01%, 100=54.95%, 250=4.04% 00:26:17.190 cpu : usr=32.41%, sys=0.43%, ctx=879, majf=0, minf=9 00:26:17.190 IO depths : 1=0.3%, 2=0.8%, 4=5.4%, 8=79.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:26:17.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 complete : 0=0.0%, 4=89.1%, 8=7.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 issued rwts: total=2648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.190 filename0: (groupid=0, jobs=1): err= 0: pid=102217: Fri Dec 13 13:10:57 2024 00:26:17.190 read: IOPS=247, BW=990KiB/s (1014kB/s)(9960KiB/10063msec) 00:26:17.190 slat (usec): min=3, max=8038, avg=18.22, stdev=226.87 00:26:17.190 clat (msec): min=8, max=141, avg=64.44, stdev=21.22 00:26:17.190 lat (msec): min=8, max=141, avg=64.46, stdev=21.22 00:26:17.190 clat percentiles (msec): 00:26:17.190 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 00:26:17.190 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 68], 00:26:17.190 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 103], 00:26:17.190 | 99.00th=[ 123], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 142], 00:26:17.190 | 99.99th=[ 142] 00:26:17.190 bw ( KiB/s): min= 608, max= 1240, per=4.44%, avg=989.60, stdev=155.50, samples=20 00:26:17.190 iops : min= 152, max= 310, avg=247.40, stdev=38.87, samples=20 00:26:17.190 lat (msec) : 10=0.64%, 50=28.15%, 100=65.22%, 250=5.98% 00:26:17.190 cpu : usr=35.95%, sys=0.65%, ctx=1158, majf=0, minf=9 00:26:17.190 IO depths : 1=0.8%, 2=2.5%, 4=10.8%, 8=73.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:17.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.190 issued rwts: total=2490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.190 filename0: (groupid=0, jobs=1): err= 0: pid=102218: Fri Dec 13 13:10:57 2024 00:26:17.190 read: IOPS=245, BW=984KiB/s (1007kB/s)(9880KiB/10043msec) 00:26:17.190 slat (usec): min=6, max=7034, avg=14.91, stdev=141.48 00:26:17.190 clat (msec): min=19, max=175, avg=64.94, stdev=23.92 00:26:17.190 lat (msec): min=19, max=175, avg=64.95, stdev=23.92 00:26:17.190 clat percentiles (msec): 00:26:17.190 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 43], 00:26:17.191 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 68], 00:26:17.191 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:26:17.191 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 176], 99.95th=[ 176], 00:26:17.191 | 99.99th=[ 176] 00:26:17.191 bw ( KiB/s): min= 552, max= 1424, per=4.40%, avg=981.35, stdev=230.11, samples=20 00:26:17.191 iops : min= 138, max= 356, avg=245.30, stdev=57.57, samples=20 00:26:17.191 lat (msec) : 20=0.24%, 50=31.70%, 100=59.27%, 250=8.79% 00:26:17.191 cpu : usr=40.29%, sys=0.74%, ctx=1208, majf=0, 
minf=9 00:26:17.191 IO depths : 1=0.7%, 2=1.5%, 4=8.1%, 8=76.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=89.5%, 8=6.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename0: (groupid=0, jobs=1): err= 0: pid=102219: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=264, BW=1057KiB/s (1082kB/s)(10.4MiB/10070msec) 00:26:17.191 slat (usec): min=5, max=8024, avg=21.12, stdev=246.39 00:26:17.191 clat (msec): min=24, max=133, avg=60.36, stdev=19.15 00:26:17.191 lat (msec): min=24, max=133, avg=60.38, stdev=19.15 00:26:17.191 clat percentiles (msec): 00:26:17.191 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 45], 00:26:17.191 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 63], 00:26:17.191 | 70.00th=[ 68], 80.00th=[ 77], 90.00th=[ 84], 95.00th=[ 97], 00:26:17.191 | 99.00th=[ 118], 99.50th=[ 124], 99.90th=[ 134], 99.95th=[ 134], 00:26:17.191 | 99.99th=[ 134] 00:26:17.191 bw ( KiB/s): min= 744, max= 1376, per=4.74%, avg=1057.55, stdev=184.07, samples=20 00:26:17.191 iops : min= 186, max= 344, avg=264.35, stdev=46.03, samples=20 00:26:17.191 lat (msec) : 50=35.33%, 100=60.09%, 250=4.58% 00:26:17.191 cpu : usr=42.49%, sys=0.70%, ctx=1314, majf=0, minf=9 00:26:17.191 IO depths : 1=1.3%, 2=2.9%, 4=10.7%, 8=73.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=90.1%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename1: (groupid=0, jobs=1): err= 0: pid=102220: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=215, BW=864KiB/s (885kB/s)(8664KiB/10028msec) 00:26:17.191 slat (usec): min=3, max=4496, avg=17.16, stdev=129.66 00:26:17.191 clat (msec): min=24, max=159, avg=73.95, stdev=22.88 00:26:17.191 lat (msec): min=24, max=159, avg=73.97, stdev=22.88 00:26:17.191 clat percentiles (msec): 00:26:17.191 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:26:17.191 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 77], 00:26:17.191 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 101], 95.00th=[ 117], 00:26:17.191 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 161], 99.95th=[ 161], 00:26:17.191 | 99.99th=[ 161] 00:26:17.191 bw ( KiB/s): min= 512, max= 1208, per=3.85%, avg=859.65, stdev=163.20, samples=20 00:26:17.191 iops : min= 128, max= 302, avg=214.90, stdev=40.78, samples=20 00:26:17.191 lat (msec) : 50=11.59%, 100=78.02%, 250=10.39% 00:26:17.191 cpu : usr=44.67%, sys=0.70%, ctx=1274, majf=0, minf=9 00:26:17.191 IO depths : 1=2.5%, 2=6.1%, 4=16.8%, 8=64.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename1: (groupid=0, jobs=1): err= 0: pid=102221: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=211, BW=846KiB/s (866kB/s)(8484KiB/10028msec) 00:26:17.191 slat (usec): min=6, max=8029, avg=21.94, stdev=223.12 00:26:17.191 clat (msec): min=33, max=192, avg=75.43, 
stdev=23.10 00:26:17.191 lat (msec): min=33, max=192, avg=75.46, stdev=23.11 00:26:17.191 clat percentiles (msec): 00:26:17.191 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 60], 00:26:17.191 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 75], 00:26:17.191 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 122], 00:26:17.191 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 192], 99.95th=[ 192], 00:26:17.191 | 99.99th=[ 192] 00:26:17.191 bw ( KiB/s): min= 512, max= 1048, per=3.77%, avg=841.80, stdev=150.71, samples=20 00:26:17.191 iops : min= 128, max= 262, avg=210.40, stdev=37.71, samples=20 00:26:17.191 lat (msec) : 50=7.02%, 100=79.82%, 250=13.15% 00:26:17.191 cpu : usr=40.55%, sys=0.60%, ctx=1118, majf=0, minf=9 00:26:17.191 IO depths : 1=3.3%, 2=7.0%, 4=17.6%, 8=62.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename1: (groupid=0, jobs=1): err= 0: pid=102222: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=251, BW=1007KiB/s (1031kB/s)(9.88MiB/10048msec) 00:26:17.191 slat (usec): min=5, max=8032, avg=18.64, stdev=225.08 00:26:17.191 clat (msec): min=26, max=144, avg=63.39, stdev=20.15 00:26:17.191 lat (msec): min=26, max=144, avg=63.41, stdev=20.15 00:26:17.191 clat percentiles (msec): 00:26:17.191 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 00:26:17.191 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 68], 00:26:17.191 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 99], 00:26:17.191 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 146], 99.95th=[ 146], 00:26:17.191 | 99.99th=[ 146] 00:26:17.191 bw ( KiB/s): min= 656, max= 1376, per=4.50%, avg=1004.85, stdev=185.62, samples=20 00:26:17.191 iops : min= 164, max= 344, avg=251.20, stdev=46.40, samples=20 00:26:17.191 lat (msec) : 50=31.32%, 100=64.25%, 250=4.43% 00:26:17.191 cpu : usr=33.44%, sys=0.39%, ctx=903, majf=0, minf=9 00:26:17.191 IO depths : 1=0.6%, 2=1.7%, 4=8.5%, 8=76.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename1: (groupid=0, jobs=1): err= 0: pid=102223: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=208, BW=833KiB/s (853kB/s)(8336KiB/10011msec) 00:26:17.191 slat (usec): min=3, max=8024, avg=28.41, stdev=349.90 00:26:17.191 clat (msec): min=34, max=177, avg=76.72, stdev=25.79 00:26:17.191 lat (msec): min=34, max=177, avg=76.75, stdev=25.79 00:26:17.191 clat percentiles (msec): 00:26:17.191 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:26:17.191 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:26:17.191 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 112], 95.00th=[ 130], 00:26:17.191 | 99.00th=[ 163], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 178], 00:26:17.191 | 99.99th=[ 178] 00:26:17.191 bw ( KiB/s): min= 512, max= 1152, per=3.71%, avg=827.20, stdev=164.74, samples=20 00:26:17.191 iops : min= 128, max= 288, avg=206.80, stdev=41.18, samples=20 00:26:17.191 lat (msec) : 50=10.03%, 100=73.61%, 250=16.36% 00:26:17.191 
cpu : usr=32.32%, sys=0.48%, ctx=878, majf=0, minf=9 00:26:17.191 IO depths : 1=1.9%, 2=4.2%, 4=12.6%, 8=69.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename1: (groupid=0, jobs=1): err= 0: pid=102224: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=242, BW=968KiB/s (992kB/s)(9740KiB/10058msec) 00:26:17.191 slat (nsec): min=3322, max=85317, avg=12075.41, stdev=7515.51 00:26:17.191 clat (msec): min=26, max=158, avg=65.94, stdev=21.92 00:26:17.191 lat (msec): min=26, max=158, avg=65.95, stdev=21.92 00:26:17.191 clat percentiles (msec): 00:26:17.191 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 47], 00:26:17.191 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 67], 00:26:17.191 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 111], 00:26:17.191 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 159], 99.95th=[ 159], 00:26:17.191 | 99.99th=[ 159] 00:26:17.191 bw ( KiB/s): min= 640, max= 1280, per=4.34%, avg=967.60, stdev=173.93, samples=20 00:26:17.191 iops : min= 160, max= 320, avg=241.90, stdev=43.48, samples=20 00:26:17.191 lat (msec) : 50=25.01%, 100=66.37%, 250=8.62% 00:26:17.191 cpu : usr=37.95%, sys=0.58%, ctx=1341, majf=0, minf=9 00:26:17.191 IO depths : 1=1.2%, 2=2.7%, 4=9.4%, 8=74.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename1: (groupid=0, jobs=1): err= 0: pid=102225: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=215, BW=864KiB/s (884kB/s)(8664KiB/10033msec) 00:26:17.191 slat (usec): min=6, max=7085, avg=19.63, stdev=190.53 00:26:17.191 clat (msec): min=25, max=186, avg=73.96, stdev=22.64 00:26:17.191 lat (msec): min=25, max=186, avg=73.98, stdev=22.63 00:26:17.191 clat percentiles (msec): 00:26:17.191 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 59], 00:26:17.191 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:26:17.191 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 115], 00:26:17.191 | 99.00th=[ 163], 99.50th=[ 163], 99.90th=[ 186], 99.95th=[ 186], 00:26:17.191 | 99.99th=[ 186] 00:26:17.191 bw ( KiB/s): min= 480, max= 1120, per=3.86%, avg=860.00, stdev=163.88, samples=20 00:26:17.191 iops : min= 120, max= 280, avg=215.00, stdev=40.97, samples=20 00:26:17.191 lat (msec) : 50=13.16%, 100=77.98%, 250=8.86% 00:26:17.191 cpu : usr=35.42%, sys=0.65%, ctx=1090, majf=0, minf=9 00:26:17.191 IO depths : 1=1.5%, 2=4.0%, 4=13.2%, 8=69.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.191 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.191 filename1: (groupid=0, jobs=1): err= 0: pid=102226: Fri Dec 13 13:10:57 2024 00:26:17.191 read: IOPS=222, BW=889KiB/s (910kB/s)(8916KiB/10029msec) 00:26:17.191 slat (usec): min=5, max=8025, avg=27.01, stdev=271.28 
00:26:17.191 clat (msec): min=23, max=183, avg=71.80, stdev=22.34 00:26:17.191 lat (msec): min=23, max=183, avg=71.83, stdev=22.34 00:26:17.191 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 54], 00:26:17.192 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 77], 00:26:17.192 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 109], 00:26:17.192 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 184], 99.95th=[ 184], 00:26:17.192 | 99.99th=[ 184] 00:26:17.192 bw ( KiB/s): min= 512, max= 1088, per=3.97%, avg=884.85, stdev=149.59, samples=20 00:26:17.192 iops : min= 128, max= 272, avg=221.20, stdev=37.39, samples=20 00:26:17.192 lat (msec) : 50=17.77%, 100=72.50%, 250=9.74% 00:26:17.192 cpu : usr=39.79%, sys=0.67%, ctx=1308, majf=0, minf=9 00:26:17.192 IO depths : 1=0.9%, 2=2.0%, 4=9.3%, 8=75.0%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename1: (groupid=0, jobs=1): err= 0: pid=102227: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=217, BW=869KiB/s (889kB/s)(8688KiB/10002msec) 00:26:17.192 slat (usec): min=6, max=6129, avg=16.83, stdev=157.33 00:26:17.192 clat (msec): min=3, max=177, avg=73.54, stdev=24.58 00:26:17.192 lat (msec): min=3, max=177, avg=73.56, stdev=24.58 00:26:17.192 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 30], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 55], 00:26:17.192 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 75], 00:26:17.192 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 120], 00:26:17.192 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 178], 99.95th=[ 178], 00:26:17.192 | 99.99th=[ 178] 00:26:17.192 bw ( KiB/s): min= 536, max= 1280, per=3.92%, avg=874.11, stdev=167.23, samples=19 00:26:17.192 iops : min= 134, max= 320, avg=218.53, stdev=41.81, samples=19 00:26:17.192 lat (msec) : 4=0.09%, 20=0.64%, 50=15.70%, 100=71.55%, 250=12.02% 00:26:17.192 cpu : usr=42.11%, sys=0.88%, ctx=1346, majf=0, minf=9 00:26:17.192 IO depths : 1=2.3%, 2=5.2%, 4=14.5%, 8=66.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=91.4%, 8=3.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename2: (groupid=0, jobs=1): err= 0: pid=102228: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=240, BW=960KiB/s (983kB/s)(9652KiB/10052msec) 00:26:17.192 slat (usec): min=4, max=8033, avg=15.20, stdev=163.45 00:26:17.192 clat (msec): min=27, max=190, avg=66.37, stdev=23.11 00:26:17.192 lat (msec): min=27, max=190, avg=66.39, stdev=23.10 00:26:17.192 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 00:26:17.192 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 68], 00:26:17.192 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 111], 00:26:17.192 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 192], 99.95th=[ 192], 00:26:17.192 | 99.99th=[ 192] 00:26:17.192 bw ( KiB/s): min= 560, max= 1200, per=4.32%, avg=962.80, stdev=162.51, samples=20 00:26:17.192 iops : min= 140, max= 300, avg=240.70, stdev=40.63, samples=20 
00:26:17.192 lat (msec) : 50=28.18%, 100=63.28%, 250=8.54% 00:26:17.192 cpu : usr=34.38%, sys=0.68%, ctx=938, majf=0, minf=9 00:26:17.192 IO depths : 1=1.0%, 2=2.3%, 4=9.2%, 8=75.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename2: (groupid=0, jobs=1): err= 0: pid=102229: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=213, BW=856KiB/s (876kB/s)(8572KiB/10019msec) 00:26:17.192 slat (usec): min=4, max=10008, avg=24.58, stdev=302.39 00:26:17.192 clat (msec): min=24, max=175, avg=74.56, stdev=24.63 00:26:17.192 lat (msec): min=24, max=175, avg=74.59, stdev=24.63 00:26:17.192 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 57], 00:26:17.192 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 74], 00:26:17.192 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 106], 95.00th=[ 123], 00:26:17.192 | 99.00th=[ 159], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 176], 00:26:17.192 | 99.99th=[ 176] 00:26:17.192 bw ( KiB/s): min= 384, max= 1200, per=3.81%, avg=850.20, stdev=180.92, samples=20 00:26:17.192 iops : min= 96, max= 300, avg=212.50, stdev=45.30, samples=20 00:26:17.192 lat (msec) : 50=9.33%, 100=78.72%, 250=11.95% 00:26:17.192 cpu : usr=41.26%, sys=0.68%, ctx=1216, majf=0, minf=9 00:26:17.192 IO depths : 1=2.8%, 2=6.1%, 4=15.9%, 8=65.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename2: (groupid=0, jobs=1): err= 0: pid=102230: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=227, BW=909KiB/s (931kB/s)(9128KiB/10037msec) 00:26:17.192 slat (usec): min=3, max=8037, avg=24.73, stdev=302.53 00:26:17.192 clat (msec): min=22, max=190, avg=70.15, stdev=25.67 00:26:17.192 lat (msec): min=22, max=190, avg=70.17, stdev=25.66 00:26:17.192 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 48], 00:26:17.192 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:26:17.192 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 117], 00:26:17.192 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 190], 99.95th=[ 190], 00:26:17.192 | 99.99th=[ 190] 00:26:17.192 bw ( KiB/s): min= 440, max= 1456, per=4.07%, avg=906.45, stdev=233.18, samples=20 00:26:17.192 iops : min= 110, max= 364, avg=226.60, stdev=58.30, samples=20 00:26:17.192 lat (msec) : 50=24.72%, 100=62.53%, 250=12.75% 00:26:17.192 cpu : usr=38.63%, sys=0.76%, ctx=1252, majf=0, minf=9 00:26:17.192 IO depths : 1=1.5%, 2=3.6%, 4=12.0%, 8=71.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename2: (groupid=0, jobs=1): err= 0: pid=102231: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=211, BW=847KiB/s (867kB/s)(8488KiB/10020msec) 
00:26:17.192 slat (usec): min=3, max=12030, avg=23.37, stdev=325.30 00:26:17.192 clat (msec): min=29, max=208, avg=75.37, stdev=26.14 00:26:17.192 lat (msec): min=29, max=208, avg=75.39, stdev=26.14 00:26:17.192 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 57], 00:26:17.192 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 77], 00:26:17.192 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 124], 00:26:17.192 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 209], 99.95th=[ 209], 00:26:17.192 | 99.99th=[ 209] 00:26:17.192 bw ( KiB/s): min= 472, max= 1176, per=3.77%, avg=841.85, stdev=179.60, samples=20 00:26:17.192 iops : min= 118, max= 294, avg=210.40, stdev=44.98, samples=20 00:26:17.192 lat (msec) : 50=11.59%, 100=73.66%, 250=14.75% 00:26:17.192 cpu : usr=42.21%, sys=0.61%, ctx=1184, majf=0, minf=9 00:26:17.192 IO depths : 1=3.3%, 2=7.1%, 4=18.2%, 8=61.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename2: (groupid=0, jobs=1): err= 0: pid=102232: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=215, BW=862KiB/s (882kB/s)(8624KiB/10009msec) 00:26:17.192 slat (nsec): min=6278, max=63000, avg=12697.04, stdev=7353.00 00:26:17.192 clat (msec): min=17, max=176, avg=74.19, stdev=22.31 00:26:17.192 lat (msec): min=17, max=176, avg=74.20, stdev=22.31 00:26:17.192 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 58], 00:26:17.192 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 78], 00:26:17.192 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 117], 00:26:17.192 | 99.00th=[ 140], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 176], 00:26:17.192 | 99.99th=[ 178] 00:26:17.192 bw ( KiB/s): min= 680, max= 1072, per=3.86%, avg=860.63, stdev=105.81, samples=19 00:26:17.192 iops : min= 170, max= 268, avg=215.16, stdev=26.45, samples=19 00:26:17.192 lat (msec) : 20=0.46%, 50=11.78%, 100=76.95%, 250=10.81% 00:26:17.192 cpu : usr=41.86%, sys=0.63%, ctx=1128, majf=0, minf=9 00:26:17.192 IO depths : 1=1.1%, 2=2.6%, 4=10.2%, 8=73.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename2: (groupid=0, jobs=1): err= 0: pid=102233: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=213, BW=853KiB/s (874kB/s)(8548KiB/10017msec) 00:26:17.192 slat (usec): min=4, max=8045, avg=19.71, stdev=245.53 00:26:17.192 clat (msec): min=22, max=224, avg=74.79, stdev=25.75 00:26:17.192 lat (msec): min=22, max=224, avg=74.81, stdev=25.75 00:26:17.192 clat percentiles (msec): 00:26:17.192 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 58], 00:26:17.192 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 74], 00:26:17.192 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 116], 00:26:17.192 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 224], 99.95th=[ 224], 00:26:17.192 | 99.99th=[ 224] 00:26:17.192 bw ( KiB/s): min= 384, max= 1120, per=3.82%, avg=852.10, stdev=156.20, samples=20 
00:26:17.192 iops : min= 96, max= 280, avg=213.00, stdev=39.05, samples=20 00:26:17.192 lat (msec) : 50=11.79%, 100=77.02%, 250=11.18% 00:26:17.192 cpu : usr=34.52%, sys=0.49%, ctx=920, majf=0, minf=9 00:26:17.192 IO depths : 1=1.6%, 2=3.7%, 4=12.4%, 8=70.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.192 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.192 filename2: (groupid=0, jobs=1): err= 0: pid=102234: Fri Dec 13 13:10:57 2024 00:26:17.192 read: IOPS=231, BW=926KiB/s (949kB/s)(9304KiB/10043msec) 00:26:17.192 slat (usec): min=4, max=8018, avg=25.74, stdev=331.45 00:26:17.192 clat (msec): min=26, max=175, avg=68.86, stdev=22.22 00:26:17.192 lat (msec): min=26, max=175, avg=68.89, stdev=22.21 00:26:17.192 clat percentiles (msec): 00:26:17.193 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 49], 00:26:17.193 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:26:17.193 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 113], 00:26:17.193 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 153], 00:26:17.193 | 99.99th=[ 176] 00:26:17.193 bw ( KiB/s): min= 632, max= 1248, per=4.14%, avg=923.70, stdev=166.36, samples=20 00:26:17.193 iops : min= 158, max= 312, avg=230.90, stdev=41.59, samples=20 00:26:17.193 lat (msec) : 50=23.00%, 100=68.10%, 250=8.90% 00:26:17.193 cpu : usr=32.24%, sys=0.55%, ctx=886, majf=0, minf=9 00:26:17.193 IO depths : 1=0.8%, 2=2.0%, 4=8.7%, 8=75.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:17.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.193 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.193 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.193 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.193 filename2: (groupid=0, jobs=1): err= 0: pid=102235: Fri Dec 13 13:10:57 2024 00:26:17.193 read: IOPS=242, BW=970KiB/s (993kB/s)(9728KiB/10028msec) 00:26:17.193 slat (usec): min=4, max=8006, avg=23.31, stdev=255.80 00:26:17.193 clat (msec): min=25, max=173, avg=65.79, stdev=23.24 00:26:17.193 lat (msec): min=25, max=173, avg=65.82, stdev=23.24 00:26:17.193 clat percentiles (msec): 00:26:17.193 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:26:17.193 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 68], 00:26:17.193 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 109], 00:26:17.193 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 174], 99.95th=[ 174], 00:26:17.193 | 99.99th=[ 174] 00:26:17.193 bw ( KiB/s): min= 528, max= 1280, per=4.33%, avg=966.15, stdev=205.32, samples=20 00:26:17.193 iops : min= 132, max= 320, avg=241.50, stdev=51.31, samples=20 00:26:17.193 lat (msec) : 50=29.61%, 100=62.50%, 250=7.89% 00:26:17.193 cpu : usr=41.40%, sys=0.78%, ctx=1190, majf=0, minf=9 00:26:17.193 IO depths : 1=1.5%, 2=3.5%, 4=11.3%, 8=71.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:17.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.193 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.193 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.193 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.193 00:26:17.193 Run status group 0 (all jobs): 00:26:17.193 READ: bw=21.8MiB/s 
(22.8MB/s), 833KiB/s-1057KiB/s (853kB/s-1082kB/s), io=219MiB (230MB), run=10002-10070msec 00:26:17.452 13:10:58 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:17.452 13:10:58 -- target/dif.sh@43 -- # local sub 00:26:17.452 13:10:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.452 13:10:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.452 13:10:58 -- target/dif.sh@36 -- # local sub_id=0 00:26:17.452 13:10:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.452 13:10:58 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:17.452 13:10:58 -- target/dif.sh@36 -- # local sub_id=1 00:26:17.452 13:10:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.452 13:10:58 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:17.452 13:10:58 -- target/dif.sh@36 -- # local sub_id=2 00:26:17.452 13:10:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:17.452 13:10:58 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:17.452 13:10:58 -- target/dif.sh@115 -- # numjobs=2 00:26:17.452 13:10:58 -- target/dif.sh@115 -- # iodepth=8 00:26:17.452 13:10:58 -- target/dif.sh@115 -- # runtime=5 00:26:17.452 13:10:58 -- target/dif.sh@115 -- # files=1 00:26:17.452 13:10:58 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:17.452 13:10:58 -- target/dif.sh@28 -- # local sub 00:26:17.452 13:10:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.452 13:10:58 -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.452 13:10:58 -- target/dif.sh@18 -- # local sub_id=0 00:26:17.452 13:10:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.452 bdev_null0 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 [2024-12-13 13:10:58.145313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.452 13:10:58 -- target/dif.sh@31 -- # create_subsystem 1 00:26:17.452 13:10:58 -- target/dif.sh@18 -- # local sub_id=1 00:26:17.452 13:10:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 bdev_null1 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.452 13:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.452 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:26:17.452 13:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.452 13:10:58 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:17.452 13:10:58 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:17.452 13:10:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:17.452 13:10:58 -- nvmf/common.sh@520 -- # config=() 00:26:17.452 13:10:58 -- nvmf/common.sh@520 -- # local subsystem config 00:26:17.452 13:10:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:17.452 13:10:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:17.452 { 00:26:17.452 "params": { 00:26:17.452 "name": "Nvme$subsystem", 00:26:17.452 "trtype": "$TEST_TRANSPORT", 00:26:17.452 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:17.452 "adrfam": "ipv4", 00:26:17.452 "trsvcid": "$NVMF_PORT", 00:26:17.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.452 "hdgst": ${hdgst:-false}, 00:26:17.452 "ddgst": ${ddgst:-false} 00:26:17.452 }, 00:26:17.452 "method": "bdev_nvme_attach_controller" 00:26:17.452 } 00:26:17.452 EOF 00:26:17.452 )") 00:26:17.453 13:10:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.453 13:10:58 -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.453 13:10:58 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.453 13:10:58 -- target/dif.sh@54 -- # local file 00:26:17.453 13:10:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:17.453 13:10:58 -- target/dif.sh@56 -- # cat 00:26:17.453 13:10:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.453 13:10:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:17.453 13:10:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.453 13:10:58 -- nvmf/common.sh@542 -- # cat 00:26:17.453 13:10:58 -- common/autotest_common.sh@1330 -- # shift 00:26:17.453 13:10:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:17.453 13:10:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.453 13:10:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.453 13:10:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:17.453 13:10:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:17.453 13:10:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.453 13:10:58 -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.453 13:10:58 -- target/dif.sh@73 -- # cat 00:26:17.453 13:10:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:17.453 13:10:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:17.453 { 00:26:17.453 "params": { 00:26:17.453 "name": "Nvme$subsystem", 00:26:17.453 "trtype": "$TEST_TRANSPORT", 00:26:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.453 "adrfam": "ipv4", 00:26:17.453 "trsvcid": "$NVMF_PORT", 00:26:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.453 "hdgst": ${hdgst:-false}, 00:26:17.453 "ddgst": ${ddgst:-false} 00:26:17.453 }, 00:26:17.453 "method": "bdev_nvme_attach_controller" 00:26:17.453 } 00:26:17.453 EOF 00:26:17.453 )") 00:26:17.453 13:10:58 -- nvmf/common.sh@542 -- # cat 00:26:17.453 13:10:58 -- target/dif.sh@72 -- # (( file++ )) 00:26:17.453 13:10:58 -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.453 13:10:58 -- nvmf/common.sh@544 -- # jq . 
00:26:17.453 13:10:58 -- nvmf/common.sh@545 -- # IFS=, 00:26:17.453 13:10:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:17.453 "params": { 00:26:17.453 "name": "Nvme0", 00:26:17.453 "trtype": "tcp", 00:26:17.453 "traddr": "10.0.0.2", 00:26:17.453 "adrfam": "ipv4", 00:26:17.453 "trsvcid": "4420", 00:26:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.453 "hdgst": false, 00:26:17.453 "ddgst": false 00:26:17.453 }, 00:26:17.453 "method": "bdev_nvme_attach_controller" 00:26:17.453 },{ 00:26:17.453 "params": { 00:26:17.453 "name": "Nvme1", 00:26:17.453 "trtype": "tcp", 00:26:17.453 "traddr": "10.0.0.2", 00:26:17.453 "adrfam": "ipv4", 00:26:17.453 "trsvcid": "4420", 00:26:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.453 "hdgst": false, 00:26:17.453 "ddgst": false 00:26:17.453 }, 00:26:17.453 "method": "bdev_nvme_attach_controller" 00:26:17.453 }' 00:26:17.453 13:10:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:17.453 13:10:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:17.453 13:10:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.453 13:10:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.453 13:10:58 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:17.453 13:10:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:17.712 13:10:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:17.712 13:10:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:17.712 13:10:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:17.712 13:10:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.712 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:17.712 ... 00:26:17.712 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:17.712 ... 00:26:17.712 fio-3.35 00:26:17.712 Starting 4 threads 00:26:18.279 [2024-12-13 13:10:58.870372] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
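For reference, the target-side setup that target/dif.sh traces just above can be reproduced by hand with SPDK's rpc.py. The sketch below is not the harness code: it assumes $SPDK_DIR points at an SPDK checkout, that an nvmf_tgt process is already running, and that a TCP transport was created earlier in the run (nvmf_create_transport -t tcp); the bdev parameters, subsystem NQNs, serial numbers, and the 10.0.0.2:4420 listener are taken from the trace.

#!/usr/bin/env bash
# Minimal sketch: two DIF-type-1 null bdevs, each exported through its own
# NVMe/TCP subsystem, as used by the fio_dif_rand_params job above.
rpc="$SPDK_DIR/scripts/rpc.py"   # assumption: rpc.py location in an SPDK checkout

for i in 0 1; do
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    "$rpc" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    # One subsystem per bdev; any host may connect
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    # Expose the subsystem on the NVMe/TCP listener used by the test
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

fio then drives these namespaces through the SPDK bdev fio plugin rather than the kernel initiator: the harness LD_PRELOADs build/fio/spdk_bdev, passes --ioengine=spdk_bdev, and feeds the generated bdev_nvme_attach_controller entries in via --spdk_json_conf /dev/fd/62, which is what the traced command line above shows.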
00:26:18.279 [2024-12-13 13:10:58.870452] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:23.547 00:26:23.547 filename0: (groupid=0, jobs=1): err= 0: pid=102367: Fri Dec 13 13:11:03 2024 00:26:23.547 read: IOPS=2174, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5001msec) 00:26:23.547 slat (nsec): min=5790, max=82119, avg=18861.39, stdev=11802.14 00:26:23.547 clat (usec): min=1331, max=6389, avg=3597.18, stdev=241.27 00:26:23.547 lat (usec): min=1337, max=6428, avg=3616.04, stdev=240.12 00:26:23.547 clat percentiles (usec): 00:26:23.547 | 1.00th=[ 3032], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458], 00:26:23.547 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3621], 00:26:23.547 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3851], 95.00th=[ 3982], 00:26:23.547 | 99.00th=[ 4228], 99.50th=[ 4621], 99.90th=[ 5538], 99.95th=[ 6128], 00:26:23.547 | 99.99th=[ 6194] 00:26:23.547 bw ( KiB/s): min=16896, max=17920, per=25.01%, avg=17427.56, stdev=304.29, samples=9 00:26:23.547 iops : min= 2112, max= 2240, avg=2178.44, stdev=38.04, samples=9 00:26:23.547 lat (msec) : 2=0.10%, 4=95.68%, 10=4.22% 00:26:23.547 cpu : usr=94.86%, sys=3.80%, ctx=6, majf=0, minf=9 00:26:23.547 IO depths : 1=7.7%, 2=19.8%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 issued rwts: total=10875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.547 filename0: (groupid=0, jobs=1): err= 0: pid=102368: Fri Dec 13 13:11:03 2024 00:26:23.547 read: IOPS=2188, BW=17.1MiB/s (17.9MB/s)(85.6MiB/5004msec) 00:26:23.547 slat (nsec): min=5938, max=76142, avg=10620.48, stdev=7835.79 00:26:23.547 clat (usec): min=875, max=5280, avg=3601.96, stdev=269.78 00:26:23.547 lat (usec): min=898, max=5300, avg=3612.58, stdev=270.11 00:26:23.547 clat percentiles (usec): 00:26:23.547 | 1.00th=[ 2966], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3490], 00:26:23.547 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:26:23.547 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3916], 00:26:23.547 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 5014], 99.95th=[ 5276], 00:26:23.547 | 99.99th=[ 5276] 00:26:23.547 bw ( KiB/s): min=17152, max=18048, per=25.21%, avg=17564.44, stdev=305.45, samples=9 00:26:23.547 iops : min= 2144, max= 2256, avg=2195.56, stdev=38.18, samples=9 00:26:23.547 lat (usec) : 1000=0.19% 00:26:23.547 lat (msec) : 2=0.45%, 4=96.18%, 10=3.18% 00:26:23.547 cpu : usr=95.08%, sys=3.76%, ctx=4, majf=0, minf=9 00:26:23.547 IO depths : 1=10.1%, 2=23.5%, 4=51.4%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 issued rwts: total=10952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.547 filename1: (groupid=0, jobs=1): err= 0: pid=102369: Fri Dec 13 13:11:03 2024 00:26:23.547 read: IOPS=2175, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5001msec) 00:26:23.547 slat (nsec): min=6144, max=97865, avg=22681.03, stdev=12062.57 00:26:23.547 clat (usec): min=464, max=6445, avg=3563.99, stdev=248.82 00:26:23.547 lat (usec): min=470, max=6453, avg=3586.67, stdev=249.63 00:26:23.547 clat percentiles (usec): 
00:26:23.547 | 1.00th=[ 3130], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:26:23.547 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:26:23.547 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 3785], 95.00th=[ 3916], 00:26:23.547 | 99.00th=[ 4228], 99.50th=[ 4948], 99.90th=[ 5735], 99.95th=[ 5866], 00:26:23.547 | 99.99th=[ 6128] 00:26:23.547 bw ( KiB/s): min=16929, max=17904, per=25.01%, avg=17425.89, stdev=293.67, samples=9 00:26:23.547 iops : min= 2116, max= 2238, avg=2178.22, stdev=36.73, samples=9 00:26:23.547 lat (usec) : 500=0.03% 00:26:23.547 lat (msec) : 2=0.08%, 4=96.83%, 10=3.06% 00:26:23.547 cpu : usr=95.18%, sys=3.64%, ctx=6, majf=0, minf=9 00:26:23.547 IO depths : 1=10.3%, 2=24.5%, 4=50.5%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 issued rwts: total=10880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.547 filename1: (groupid=0, jobs=1): err= 0: pid=102370: Fri Dec 13 13:11:03 2024 00:26:23.547 read: IOPS=2175, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5002msec) 00:26:23.547 slat (nsec): min=6030, max=97070, avg=23002.04, stdev=11871.10 00:26:23.547 clat (usec): min=2026, max=5785, avg=3565.75, stdev=196.46 00:26:23.547 lat (usec): min=2037, max=5811, avg=3588.75, stdev=196.78 00:26:23.547 clat percentiles (usec): 00:26:23.547 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:26:23.547 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:23.547 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 3785], 95.00th=[ 3916], 00:26:23.547 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 5014], 99.95th=[ 5080], 00:26:23.547 | 99.99th=[ 5145] 00:26:23.547 bw ( KiB/s): min=16990, max=17920, per=25.02%, avg=17432.67, stdev=283.81, samples=9 00:26:23.547 iops : min= 2123, max= 2240, avg=2179.00, stdev=35.62, samples=9 00:26:23.547 lat (msec) : 4=97.17%, 10=2.83% 00:26:23.547 cpu : usr=94.72%, sys=3.64%, ctx=41, majf=0, minf=10 00:26:23.547 IO depths : 1=10.7%, 2=25.0%, 4=50.0%, 8=14.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.547 issued rwts: total=10880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.547 00:26:23.547 Run status group 0 (all jobs): 00:26:23.547 READ: bw=68.0MiB/s (71.4MB/s), 17.0MiB/s-17.1MiB/s (17.8MB/s-17.9MB/s), io=341MiB (357MB), run=5001-5004msec 00:26:23.547 13:11:04 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:23.547 13:11:04 -- target/dif.sh@43 -- # local sub 00:26:23.547 13:11:04 -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.547 13:11:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:23.547 13:11:04 -- target/dif.sh@36 -- # local sub_id=0 00:26:23.547 13:11:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.547 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.547 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.547 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.547 13:11:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:23.547 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:23.547 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.547 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.547 13:11:04 -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.547 13:11:04 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:23.547 13:11:04 -- target/dif.sh@36 -- # local sub_id=1 00:26:23.547 13:11:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.547 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.547 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.547 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.547 13:11:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:23.547 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.547 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.547 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.547 00:26:23.547 real 0m23.530s 00:26:23.547 user 2m7.603s 00:26:23.547 sys 0m3.853s 00:26:23.547 13:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:23.547 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.547 ************************************ 00:26:23.547 END TEST fio_dif_rand_params 00:26:23.547 ************************************ 00:26:23.547 13:11:04 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:23.547 13:11:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:23.547 13:11:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.547 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.547 ************************************ 00:26:23.547 START TEST fio_dif_digest 00:26:23.547 ************************************ 00:26:23.547 13:11:04 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:23.547 13:11:04 -- target/dif.sh@123 -- # local NULL_DIF 00:26:23.547 13:11:04 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:23.547 13:11:04 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:23.547 13:11:04 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:23.547 13:11:04 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:23.547 13:11:04 -- target/dif.sh@127 -- # numjobs=3 00:26:23.547 13:11:04 -- target/dif.sh@127 -- # iodepth=3 00:26:23.547 13:11:04 -- target/dif.sh@127 -- # runtime=10 00:26:23.548 13:11:04 -- target/dif.sh@128 -- # hdgst=true 00:26:23.548 13:11:04 -- target/dif.sh@128 -- # ddgst=true 00:26:23.548 13:11:04 -- target/dif.sh@130 -- # create_subsystems 0 00:26:23.548 13:11:04 -- target/dif.sh@28 -- # local sub 00:26:23.548 13:11:04 -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.548 13:11:04 -- target/dif.sh@31 -- # create_subsystem 0 00:26:23.548 13:11:04 -- target/dif.sh@18 -- # local sub_id=0 00:26:23.548 13:11:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:23.548 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.548 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.548 bdev_null0 00:26:23.548 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.548 13:11:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:23.548 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.548 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.548 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.548 13:11:04 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:23.548 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.548 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.548 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.548 13:11:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.548 13:11:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.548 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:26:23.806 [2024-12-13 13:11:04.324101] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.806 13:11:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.806 13:11:04 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:23.806 13:11:04 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:23.806 13:11:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:23.806 13:11:04 -- nvmf/common.sh@520 -- # config=() 00:26:23.806 13:11:04 -- nvmf/common.sh@520 -- # local subsystem config 00:26:23.806 13:11:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:23.806 13:11:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:23.806 { 00:26:23.806 "params": { 00:26:23.806 "name": "Nvme$subsystem", 00:26:23.806 "trtype": "$TEST_TRANSPORT", 00:26:23.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.806 "adrfam": "ipv4", 00:26:23.806 "trsvcid": "$NVMF_PORT", 00:26:23.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.806 "hdgst": ${hdgst:-false}, 00:26:23.806 "ddgst": ${ddgst:-false} 00:26:23.806 }, 00:26:23.806 "method": "bdev_nvme_attach_controller" 00:26:23.806 } 00:26:23.806 EOF 00:26:23.806 )") 00:26:23.806 13:11:04 -- target/dif.sh@82 -- # gen_fio_conf 00:26:23.806 13:11:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.806 13:11:04 -- target/dif.sh@54 -- # local file 00:26:23.806 13:11:04 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.806 13:11:04 -- target/dif.sh@56 -- # cat 00:26:23.806 13:11:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:23.806 13:11:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:23.806 13:11:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:23.807 13:11:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.807 13:11:04 -- nvmf/common.sh@542 -- # cat 00:26:23.807 13:11:04 -- common/autotest_common.sh@1330 -- # shift 00:26:23.807 13:11:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:23.807 13:11:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.807 13:11:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:23.807 13:11:04 -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:23.807 13:11:04 -- nvmf/common.sh@544 -- # jq . 
00:26:23.807 13:11:04 -- nvmf/common.sh@545 -- # IFS=, 00:26:23.807 13:11:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:23.807 "params": { 00:26:23.807 "name": "Nvme0", 00:26:23.807 "trtype": "tcp", 00:26:23.807 "traddr": "10.0.0.2", 00:26:23.807 "adrfam": "ipv4", 00:26:23.807 "trsvcid": "4420", 00:26:23.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.807 "hdgst": true, 00:26:23.807 "ddgst": true 00:26:23.807 }, 00:26:23.807 "method": "bdev_nvme_attach_controller" 00:26:23.807 }' 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:23.807 13:11:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:23.807 13:11:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:23.807 13:11:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:23.807 13:11:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:23.807 13:11:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:23.807 13:11:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.807 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:23.807 ... 00:26:23.807 fio-3.35 00:26:23.807 Starting 3 threads 00:26:24.374 [2024-12-13 13:11:04.871675] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
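The fio_dif_digest pass starting here differs from the earlier runs mainly on the initiator side: the generated bdev_nvme_attach_controller entry sets "hdgst" and "ddgst" to true, enabling the NVMe/TCP PDU header and data digests (CRC32C) on the connection to the DIF-type-3 null bdev. A hand-written equivalent could look roughly like the sketch below; only the attach-controller parameters appear verbatim in the trace, the surrounding "subsystems"/"config" wrapper is the standard SPDK JSON-config layout, the job values mirror the traced run, and the bdev name Nvme0n1 plus the file and plugin paths are assumptions.

# Sketch: stand-alone digest-enabled run through the SPDK bdev fio plugin.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/digest.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

# Plugin and fio paths are assumptions; the flags mirror the traced command line.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/digest.fio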
00:26:24.374 [2024-12-13 13:11:04.871783] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:34.389 00:26:34.389 filename0: (groupid=0, jobs=1): err= 0: pid=102476: Fri Dec 13 13:11:15 2024 00:26:34.389 read: IOPS=267, BW=33.4MiB/s (35.1MB/s)(335MiB/10005msec) 00:26:34.389 slat (nsec): min=6829, max=43441, avg=11873.45, stdev=3673.76 00:26:34.389 clat (usec): min=6424, max=52019, avg=11200.25, stdev=1595.00 00:26:34.389 lat (usec): min=6436, max=52029, avg=11212.12, stdev=1595.15 00:26:34.389 clat percentiles (usec): 00:26:34.389 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:26:34.389 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:26:34.389 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:26:34.389 | 99.00th=[13173], 99.50th=[13566], 99.90th=[50594], 99.95th=[51643], 00:26:34.389 | 99.99th=[52167] 00:26:34.389 bw ( KiB/s): min=31744, max=36864, per=38.67%, avg=34290.53, stdev=1149.54, samples=19 00:26:34.389 iops : min= 248, max= 288, avg=267.89, stdev= 8.98, samples=19 00:26:34.389 lat (msec) : 10=8.67%, 20=91.22%, 100=0.11% 00:26:34.389 cpu : usr=93.17%, sys=5.41%, ctx=31, majf=0, minf=9 00:26:34.389 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.389 issued rwts: total=2676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:34.389 filename0: (groupid=0, jobs=1): err= 0: pid=102477: Fri Dec 13 13:11:15 2024 00:26:34.389 read: IOPS=239, BW=29.9MiB/s (31.3MB/s)(300MiB/10043msec) 00:26:34.389 slat (nsec): min=6694, max=50566, avg=11502.57, stdev=4025.41 00:26:34.389 clat (usec): min=5904, max=50937, avg=12508.87, stdev=1518.11 00:26:34.389 lat (usec): min=5914, max=50949, avg=12520.37, stdev=1518.15 00:26:34.389 clat percentiles (usec): 00:26:34.389 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11207], 20.00th=[11731], 00:26:34.389 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:26:34.389 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:26:34.389 | 99.00th=[15008], 99.50th=[15270], 99.90th=[16057], 99.95th=[51119], 00:26:34.389 | 99.99th=[51119] 00:26:34.389 bw ( KiB/s): min=29381, max=32256, per=34.64%, avg=30714.05, stdev=760.80, samples=20 00:26:34.389 iops : min= 229, max= 252, avg=239.90, stdev= 6.03, samples=20 00:26:34.389 lat (msec) : 10=1.00%, 20=98.92%, 100=0.08% 00:26:34.389 cpu : usr=92.17%, sys=6.49%, ctx=200, majf=0, minf=9 00:26:34.389 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.389 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:34.389 filename0: (groupid=0, jobs=1): err= 0: pid=102478: Fri Dec 13 13:11:15 2024 00:26:34.389 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(235MiB/10043msec) 00:26:34.389 slat (nsec): min=6647, max=48652, avg=10170.63, stdev=4301.34 00:26:34.389 clat (usec): min=9107, max=49590, avg=15990.73, stdev=1489.45 00:26:34.389 lat (usec): min=9115, max=49601, avg=16000.90, stdev=1489.81 00:26:34.389 clat percentiles (usec): 00:26:34.389 | 
1.00th=[13566], 5.00th=[14353], 10.00th=[14746], 20.00th=[15270], 00:26:34.389 | 30.00th=[15533], 40.00th=[15795], 50.00th=[15926], 60.00th=[16188], 00:26:34.389 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17171], 95.00th=[17433], 00:26:34.389 | 99.00th=[18220], 99.50th=[18482], 99.90th=[47449], 99.95th=[49546], 00:26:34.389 | 99.99th=[49546] 00:26:34.389 bw ( KiB/s): min=23040, max=26112, per=27.09%, avg=24023.25, stdev=791.51, samples=20 00:26:34.389 iops : min= 180, max= 204, avg=187.65, stdev= 6.22, samples=20 00:26:34.389 lat (msec) : 10=0.32%, 20=99.41%, 50=0.27% 00:26:34.389 cpu : usr=94.02%, sys=4.81%, ctx=120, majf=0, minf=11 00:26:34.389 IO depths : 1=26.3%, 2=73.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.389 issued rwts: total=1879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:34.389 00:26:34.389 Run status group 0 (all jobs): 00:26:34.389 READ: bw=86.6MiB/s (90.8MB/s), 23.4MiB/s-33.4MiB/s (24.5MB/s-35.1MB/s), io=870MiB (912MB), run=10005-10043msec 00:26:34.647 13:11:15 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:34.647 13:11:15 -- target/dif.sh@43 -- # local sub 00:26:34.647 13:11:15 -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.647 13:11:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:34.647 13:11:15 -- target/dif.sh@36 -- # local sub_id=0 00:26:34.648 13:11:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:34.648 13:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.648 13:11:15 -- common/autotest_common.sh@10 -- # set +x 00:26:34.648 13:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.648 13:11:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:34.648 13:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.648 13:11:15 -- common/autotest_common.sh@10 -- # set +x 00:26:34.648 13:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.648 00:26:34.648 real 0m10.971s 00:26:34.648 user 0m28.630s 00:26:34.648 sys 0m1.937s 00:26:34.648 13:11:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:34.648 ************************************ 00:26:34.648 13:11:15 -- common/autotest_common.sh@10 -- # set +x 00:26:34.648 END TEST fio_dif_digest 00:26:34.648 ************************************ 00:26:34.648 13:11:15 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:34.648 13:11:15 -- target/dif.sh@147 -- # nvmftestfini 00:26:34.648 13:11:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:34.648 13:11:15 -- nvmf/common.sh@116 -- # sync 00:26:34.648 13:11:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:34.648 13:11:15 -- nvmf/common.sh@119 -- # set +e 00:26:34.648 13:11:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:34.648 13:11:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:34.648 rmmod nvme_tcp 00:26:34.648 rmmod nvme_fabrics 00:26:34.648 rmmod nvme_keyring 00:26:34.648 13:11:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:34.648 13:11:15 -- nvmf/common.sh@123 -- # set -e 00:26:34.648 13:11:15 -- nvmf/common.sh@124 -- # return 0 00:26:34.648 13:11:15 -- nvmf/common.sh@477 -- # '[' -n 101714 ']' 00:26:34.648 13:11:15 -- nvmf/common.sh@478 -- # killprocess 101714 00:26:34.648 13:11:15 -- common/autotest_common.sh@936 -- # '[' -z 101714 ']' 
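The teardown traced through this part of the log, destroy_subsystems followed by nvmftestfini, reduces to a short sequence: delete the subsystem and its null bdev over RPC, stop the nvmf target process (pid 101714 in this run), and unload the initiator-side kernel modules. A rough stand-alone equivalent, with the rpc.py path and the target-PID variable as assumptions:

#!/usr/bin/env bash
# Sketch of the cleanup steps visible in the surrounding trace (not the harness code).
rpc="$SPDK_DIR/scripts/rpc.py"        # assumption: SPDK checkout location

# Remove the NVMe-oF subsystem and the null bdev behind it
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
"$rpc" bdev_null_delete bdev_null0

# Stop the target; the harness tracks the PID itself (101714 here)
kill "$NVMF_TGT_PID"                  # assumption: PID exported by the caller

# Unload the initiator-side modules, as nvmftestfini does
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics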
00:26:34.648 13:11:15 -- common/autotest_common.sh@940 -- # kill -0 101714 00:26:34.648 13:11:15 -- common/autotest_common.sh@941 -- # uname 00:26:34.648 13:11:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:34.648 13:11:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101714 00:26:34.906 13:11:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:34.906 killing process with pid 101714 00:26:34.906 13:11:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:34.906 13:11:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101714' 00:26:34.906 13:11:15 -- common/autotest_common.sh@955 -- # kill 101714 00:26:34.906 13:11:15 -- common/autotest_common.sh@960 -- # wait 101714 00:26:34.906 13:11:15 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:34.906 13:11:15 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:35.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:35.473 Waiting for block devices as requested 00:26:35.473 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:35.473 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:35.473 13:11:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:35.473 13:11:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:35.473 13:11:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.473 13:11:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:35.473 13:11:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.473 13:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:35.473 13:11:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.732 13:11:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:35.732 00:26:35.732 real 0m59.940s 00:26:35.732 user 3m51.209s 00:26:35.732 sys 0m14.592s 00:26:35.732 13:11:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:35.732 13:11:16 -- common/autotest_common.sh@10 -- # set +x 00:26:35.732 ************************************ 00:26:35.732 END TEST nvmf_dif 00:26:35.732 ************************************ 00:26:35.732 13:11:16 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:35.732 13:11:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:35.732 13:11:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:35.732 13:11:16 -- common/autotest_common.sh@10 -- # set +x 00:26:35.732 ************************************ 00:26:35.732 START TEST nvmf_abort_qd_sizes 00:26:35.732 ************************************ 00:26:35.732 13:11:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:35.732 * Looking for test storage... 
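The nvmf_dif teardown traced above (nvmftestfini plus killprocess and the setup.sh reset) amounts to stopping the nvmf_tgt process, unloading the initiator-side NVMe modules, and tearing down the veth fixture. A condensed sketch of those steps, with the pid and interface name taken from the log (the real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh, and do more bookkeeping than this):

    pid=101714                                  # nvmf_tgt pid reported by killprocess above
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 0.2; done   # wait for it to exit
    fi
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring          # unload initiator modules
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null               # drop the target namespace
    ip -4 addr flush nvmf_init_if                              # clear the initiator veth address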
00:26:35.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:35.732 13:11:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:35.732 13:11:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:35.732 13:11:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:35.732 13:11:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:35.732 13:11:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:35.732 13:11:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:35.732 13:11:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:35.732 13:11:16 -- scripts/common.sh@335 -- # IFS=.-: 00:26:35.732 13:11:16 -- scripts/common.sh@335 -- # read -ra ver1 00:26:35.732 13:11:16 -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.732 13:11:16 -- scripts/common.sh@336 -- # read -ra ver2 00:26:35.732 13:11:16 -- scripts/common.sh@337 -- # local 'op=<' 00:26:35.732 13:11:16 -- scripts/common.sh@339 -- # ver1_l=2 00:26:35.732 13:11:16 -- scripts/common.sh@340 -- # ver2_l=1 00:26:35.732 13:11:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:35.732 13:11:16 -- scripts/common.sh@343 -- # case "$op" in 00:26:35.732 13:11:16 -- scripts/common.sh@344 -- # : 1 00:26:35.732 13:11:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:35.732 13:11:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.732 13:11:16 -- scripts/common.sh@364 -- # decimal 1 00:26:35.732 13:11:16 -- scripts/common.sh@352 -- # local d=1 00:26:35.732 13:11:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.732 13:11:16 -- scripts/common.sh@354 -- # echo 1 00:26:35.732 13:11:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:35.732 13:11:16 -- scripts/common.sh@365 -- # decimal 2 00:26:35.732 13:11:16 -- scripts/common.sh@352 -- # local d=2 00:26:35.732 13:11:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.732 13:11:16 -- scripts/common.sh@354 -- # echo 2 00:26:35.732 13:11:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:35.732 13:11:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:35.732 13:11:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:35.732 13:11:16 -- scripts/common.sh@367 -- # return 0 00:26:35.732 13:11:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.732 13:11:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.732 --rc genhtml_branch_coverage=1 00:26:35.732 --rc genhtml_function_coverage=1 00:26:35.732 --rc genhtml_legend=1 00:26:35.732 --rc geninfo_all_blocks=1 00:26:35.732 --rc geninfo_unexecuted_blocks=1 00:26:35.732 00:26:35.732 ' 00:26:35.732 13:11:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.732 --rc genhtml_branch_coverage=1 00:26:35.732 --rc genhtml_function_coverage=1 00:26:35.732 --rc genhtml_legend=1 00:26:35.732 --rc geninfo_all_blocks=1 00:26:35.732 --rc geninfo_unexecuted_blocks=1 00:26:35.732 00:26:35.732 ' 00:26:35.732 13:11:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.732 --rc genhtml_branch_coverage=1 00:26:35.732 --rc genhtml_function_coverage=1 00:26:35.732 --rc genhtml_legend=1 00:26:35.732 --rc geninfo_all_blocks=1 00:26:35.732 --rc geninfo_unexecuted_blocks=1 00:26:35.732 00:26:35.732 ' 00:26:35.732 
13:11:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.732 --rc genhtml_branch_coverage=1 00:26:35.732 --rc genhtml_function_coverage=1 00:26:35.732 --rc genhtml_legend=1 00:26:35.732 --rc geninfo_all_blocks=1 00:26:35.732 --rc geninfo_unexecuted_blocks=1 00:26:35.732 00:26:35.732 ' 00:26:35.732 13:11:16 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:35.732 13:11:16 -- nvmf/common.sh@7 -- # uname -s 00:26:35.732 13:11:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.732 13:11:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.732 13:11:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.732 13:11:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.732 13:11:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.732 13:11:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.732 13:11:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.732 13:11:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.732 13:11:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.732 13:11:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.990 13:11:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 00:26:35.990 13:11:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=bbff34b2-04c8-46f0-b010-522cecaddf29 00:26:35.990 13:11:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.990 13:11:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.990 13:11:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:35.990 13:11:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:35.990 13:11:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.991 13:11:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.991 13:11:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.991 13:11:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.991 13:11:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.991 13:11:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.991 13:11:16 -- paths/export.sh@5 -- # export PATH 00:26:35.991 13:11:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.991 13:11:16 -- nvmf/common.sh@46 -- # : 0 00:26:35.991 13:11:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:35.991 13:11:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:35.991 13:11:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:35.991 13:11:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.991 13:11:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.991 13:11:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:35.991 13:11:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:35.991 13:11:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:35.991 13:11:16 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:35.991 13:11:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:35.991 13:11:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.991 13:11:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:35.991 13:11:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:35.991 13:11:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:35.991 13:11:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.991 13:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:35.991 13:11:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.991 13:11:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:35.991 13:11:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:35.991 13:11:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:35.991 13:11:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:35.991 13:11:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:35.991 13:11:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:35.991 13:11:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.991 13:11:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.991 13:11:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:35.991 13:11:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:35.991 13:11:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:35.991 13:11:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:35.991 13:11:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:35.991 13:11:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.991 13:11:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:35.991 13:11:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:35.991 13:11:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:35.991 13:11:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:35.991 13:11:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:35.991 13:11:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:35.991 Cannot find device "nvmf_tgt_br" 00:26:35.991 13:11:16 -- nvmf/common.sh@154 -- # true 00:26:35.991 13:11:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:35.991 Cannot find device "nvmf_tgt_br2" 00:26:35.991 13:11:16 -- nvmf/common.sh@155 -- # true 
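For reference, the host identity used throughout these runs comes from nvme-cli as traced above: nvme gen-hostnqn emits a uuid-based NQN, and the hostid is the trailing uuid. One way to reproduce that derivation (a sketch only; the exact shell in nvmf/common.sh may differ, and the connect line is a hypothetical usage example, not something run at this point in the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:bbff34b2-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep just the uuid after the last colon
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Hypothetical usage: connect to an NVMe/TCP subsystem with that identity.
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0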
00:26:35.991 13:11:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:35.991 13:11:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:35.991 Cannot find device "nvmf_tgt_br" 00:26:35.991 13:11:16 -- nvmf/common.sh@157 -- # true 00:26:35.991 13:11:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:35.991 Cannot find device "nvmf_tgt_br2" 00:26:35.991 13:11:16 -- nvmf/common.sh@158 -- # true 00:26:35.991 13:11:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:35.991 13:11:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:35.991 13:11:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:35.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:35.991 13:11:16 -- nvmf/common.sh@161 -- # true 00:26:35.991 13:11:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:35.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:35.991 13:11:16 -- nvmf/common.sh@162 -- # true 00:26:35.991 13:11:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:35.991 13:11:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:35.991 13:11:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:35.991 13:11:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:35.991 13:11:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:35.991 13:11:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:35.991 13:11:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:35.991 13:11:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:35.991 13:11:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:35.991 13:11:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:35.991 13:11:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:35.991 13:11:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:35.991 13:11:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:35.991 13:11:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:35.991 13:11:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:35.991 13:11:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:35.991 13:11:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:35.991 13:11:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:36.250 13:11:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:36.250 13:11:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:36.250 13:11:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:36.250 13:11:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:36.250 13:11:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:36.250 13:11:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:36.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:36.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:26:36.250 00:26:36.250 --- 10.0.0.2 ping statistics --- 00:26:36.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.250 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:36.250 13:11:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:36.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:36.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:26:36.250 00:26:36.250 --- 10.0.0.3 ping statistics --- 00:26:36.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.250 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:36.250 13:11:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:36.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:36.250 00:26:36.250 --- 10.0.0.1 ping statistics --- 00:26:36.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.250 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:36.250 13:11:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.250 13:11:16 -- nvmf/common.sh@421 -- # return 0 00:26:36.250 13:11:16 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:36.250 13:11:16 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:36.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:37.076 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:37.076 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:37.076 13:11:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.076 13:11:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:37.076 13:11:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:37.076 13:11:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.076 13:11:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:37.076 13:11:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:37.076 13:11:17 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:37.076 13:11:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:37.076 13:11:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:37.076 13:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:37.076 13:11:17 -- nvmf/common.sh@469 -- # nvmfpid=103092 00:26:37.076 13:11:17 -- nvmf/common.sh@470 -- # waitforlisten 103092 00:26:37.076 13:11:17 -- common/autotest_common.sh@829 -- # '[' -z 103092 ']' 00:26:37.076 13:11:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.076 13:11:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:37.076 13:11:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.076 13:11:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.076 13:11:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.076 13:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:37.076 [2024-12-13 13:11:17.810685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
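The ping output above confirms the veth/namespace fixture that nvmf_veth_init just built: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, and everything is stitched together through the nvmf_br bridge with TCP port 4420 opened. A condensed sketch of that setup, following the commands in the trace (the second target interface and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                             # target should now answer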
00:26:37.076 [2024-12-13 13:11:17.811450] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.334 [2024-12-13 13:11:17.951907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.334 [2024-12-13 13:11:18.029199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:37.334 [2024-12-13 13:11:18.029395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.334 [2024-12-13 13:11:18.029411] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.334 [2024-12-13 13:11:18.029421] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.334 [2024-12-13 13:11:18.029586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.334 [2024-12-13 13:11:18.030538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.334 [2024-12-13 13:11:18.030694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.334 [2024-12-13 13:11:18.030700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.263 13:11:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.263 13:11:18 -- common/autotest_common.sh@862 -- # return 0 00:26:38.263 13:11:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:38.263 13:11:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:38.263 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.263 13:11:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:38.263 13:11:18 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:38.263 13:11:18 -- scripts/common.sh@312 -- # local nvmes 00:26:38.263 13:11:18 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:38.263 13:11:18 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:38.263 13:11:18 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:38.263 13:11:18 -- scripts/common.sh@297 -- # local bdf= 00:26:38.263 13:11:18 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:38.263 13:11:18 -- scripts/common.sh@232 -- # local class 00:26:38.263 13:11:18 -- scripts/common.sh@233 -- # local subclass 00:26:38.263 13:11:18 -- scripts/common.sh@234 -- # local progif 00:26:38.263 13:11:18 -- scripts/common.sh@235 -- # printf %02x 1 00:26:38.263 13:11:18 -- scripts/common.sh@235 -- # class=01 00:26:38.263 13:11:18 -- scripts/common.sh@236 -- # printf %02x 8 00:26:38.263 13:11:18 -- scripts/common.sh@236 -- # subclass=08 00:26:38.263 13:11:18 -- scripts/common.sh@237 -- # printf %02x 2 00:26:38.263 13:11:18 -- scripts/common.sh@237 -- # progif=02 00:26:38.263 13:11:18 -- scripts/common.sh@239 -- # hash lspci 00:26:38.263 13:11:18 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:38.263 13:11:18 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:38.263 13:11:18 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:38.263 13:11:18 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:38.263 13:11:18 -- scripts/common.sh@244 -- # tr -d '"' 00:26:38.263 13:11:18 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:38.263 13:11:18 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:38.263 13:11:18 -- scripts/common.sh@15 -- # local i 00:26:38.263 13:11:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:38.263 13:11:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:38.263 13:11:18 -- scripts/common.sh@24 -- # return 0 00:26:38.263 13:11:18 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:38.263 13:11:18 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:38.263 13:11:18 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:38.263 13:11:18 -- scripts/common.sh@15 -- # local i 00:26:38.263 13:11:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:38.263 13:11:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:38.263 13:11:18 -- scripts/common.sh@24 -- # return 0 00:26:38.263 13:11:18 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:38.263 13:11:18 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:38.263 13:11:18 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:38.263 13:11:18 -- scripts/common.sh@322 -- # uname -s 00:26:38.263 13:11:18 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:38.263 13:11:18 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:38.263 13:11:18 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:38.263 13:11:18 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:38.263 13:11:18 -- scripts/common.sh@322 -- # uname -s 00:26:38.263 13:11:18 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:38.263 13:11:18 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:38.263 13:11:18 -- scripts/common.sh@327 -- # (( 2 )) 00:26:38.263 13:11:18 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:38.263 13:11:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:38.263 13:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:38.263 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.263 ************************************ 00:26:38.263 START TEST spdk_target_abort 00:26:38.263 ************************************ 00:26:38.263 13:11:18 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:38.263 13:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.263 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.263 spdk_targetn1 00:26:38.263 13:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.263 13:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.263 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.263 [2024-12-13 
13:11:18.975858] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.263 13:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:38.263 13:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.263 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.263 13:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.263 13:11:18 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:38.263 13:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.263 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.263 13:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:38.263 13:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.263 13:11:19 -- common/autotest_common.sh@10 -- # set +x 00:26:38.263 [2024-12-13 13:11:19.004018] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.263 13:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:38.263 13:11:19 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:41.544 Initializing NVMe Controllers 00:26:41.544 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:41.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:41.544 Initialization complete. Launching workers. 00:26:41.544 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10440, failed: 0 00:26:41.544 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1076, failed to submit 9364 00:26:41.544 success 812, unsuccess 264, failed 0 00:26:41.544 13:11:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:41.544 13:11:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:44.826 Initializing NVMe Controllers 00:26:44.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:44.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:44.826 Initialization complete. Launching workers. 00:26:44.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5980, failed: 0 00:26:44.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1232, failed to submit 4748 00:26:44.826 success 291, unsuccess 941, failed 0 00:26:44.826 13:11:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:44.826 13:11:25 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:48.108 Initializing NVMe Controllers 00:26:48.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:48.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:48.108 Initialization complete. Launching workers. 
00:26:48.108 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30906, failed: 0 00:26:48.108 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2651, failed to submit 28255 00:26:48.108 success 510, unsuccess 2141, failed 0 00:26:48.108 13:11:28 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:48.108 13:11:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.108 13:11:28 -- common/autotest_common.sh@10 -- # set +x 00:26:48.108 13:11:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.108 13:11:28 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:48.108 13:11:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.108 13:11:28 -- common/autotest_common.sh@10 -- # set +x 00:26:48.674 13:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.674 13:11:29 -- target/abort_qd_sizes.sh@62 -- # killprocess 103092 00:26:48.674 13:11:29 -- common/autotest_common.sh@936 -- # '[' -z 103092 ']' 00:26:48.674 13:11:29 -- common/autotest_common.sh@940 -- # kill -0 103092 00:26:48.674 13:11:29 -- common/autotest_common.sh@941 -- # uname 00:26:48.674 13:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:48.674 13:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103092 00:26:48.674 13:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:48.674 13:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:48.674 killing process with pid 103092 00:26:48.674 13:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103092' 00:26:48.674 13:11:29 -- common/autotest_common.sh@955 -- # kill 103092 00:26:48.674 13:11:29 -- common/autotest_common.sh@960 -- # wait 103092 00:26:48.933 00:26:48.933 real 0m10.711s 00:26:48.933 user 0m44.016s 00:26:48.933 sys 0m1.714s 00:26:48.933 13:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:48.933 13:11:29 -- common/autotest_common.sh@10 -- # set +x 00:26:48.933 ************************************ 00:26:48.933 END TEST spdk_target_abort 00:26:48.933 ************************************ 00:26:48.933 13:11:29 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:48.933 13:11:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:48.933 13:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:48.933 13:11:29 -- common/autotest_common.sh@10 -- # set +x 00:26:48.933 ************************************ 00:26:48.933 START TEST kernel_target_abort 00:26:48.933 ************************************ 00:26:48.933 13:11:29 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:48.933 13:11:29 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:48.933 13:11:29 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:48.933 13:11:29 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:48.933 13:11:29 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:48.933 13:11:29 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:48.933 13:11:29 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:48.933 13:11:29 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:48.933 13:11:29 -- nvmf/common.sh@627 -- # local block nvme 00:26:48.933 13:11:29 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:48.933 13:11:29 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:48.933 13:11:29 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:48.933 13:11:29 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:49.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:49.500 Waiting for block devices as requested 00:26:49.500 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:49.500 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:49.500 13:11:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:49.500 13:11:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:49.500 13:11:30 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:49.500 13:11:30 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:49.500 13:11:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:49.500 No valid GPT data, bailing 00:26:49.500 13:11:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:49.758 13:11:30 -- scripts/common.sh@393 -- # pt= 00:26:49.758 13:11:30 -- scripts/common.sh@394 -- # return 1 00:26:49.758 13:11:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:49.758 13:11:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:49.758 13:11:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:49.758 13:11:30 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:49.758 13:11:30 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:49.758 13:11:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:49.758 No valid GPT data, bailing 00:26:49.758 13:11:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:49.758 13:11:30 -- scripts/common.sh@393 -- # pt= 00:26:49.758 13:11:30 -- scripts/common.sh@394 -- # return 1 00:26:49.758 13:11:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:49.758 13:11:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:49.758 13:11:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:49.758 13:11:30 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:49.758 13:11:30 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:49.758 13:11:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:49.758 No valid GPT data, bailing 00:26:49.758 13:11:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:49.758 13:11:30 -- scripts/common.sh@393 -- # pt= 00:26:49.758 13:11:30 -- scripts/common.sh@394 -- # return 1 00:26:49.758 13:11:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:49.758 13:11:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:49.758 13:11:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:49.758 13:11:30 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:49.758 13:11:30 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:49.758 13:11:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:49.758 No valid GPT data, bailing 00:26:49.758 13:11:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:49.758 13:11:30 -- scripts/common.sh@393 -- # pt= 00:26:49.758 13:11:30 -- scripts/common.sh@394 -- # return 1 00:26:49.758 13:11:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:49.758 13:11:30 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:49.758 13:11:30 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:49.758 13:11:30 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:49.758 13:11:30 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:49.758 13:11:30 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:49.758 13:11:30 -- nvmf/common.sh@654 -- # echo 1 00:26:49.758 13:11:30 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:49.758 13:11:30 -- nvmf/common.sh@656 -- # echo 1 00:26:49.758 13:11:30 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:49.758 13:11:30 -- nvmf/common.sh@663 -- # echo tcp 00:26:49.758 13:11:30 -- nvmf/common.sh@664 -- # echo 4420 00:26:49.758 13:11:30 -- nvmf/common.sh@665 -- # echo ipv4 00:26:49.758 13:11:30 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:49.758 13:11:30 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bbff34b2-04c8-46f0-b010-522cecaddf29 --hostid=bbff34b2-04c8-46f0-b010-522cecaddf29 -a 10.0.0.1 -t tcp -s 4420 00:26:49.758 00:26:49.758 Discovery Log Number of Records 2, Generation counter 2 00:26:49.758 =====Discovery Log Entry 0====== 00:26:49.758 trtype: tcp 00:26:49.758 adrfam: ipv4 00:26:49.758 subtype: current discovery subsystem 00:26:49.758 treq: not specified, sq flow control disable supported 00:26:49.758 portid: 1 00:26:49.758 trsvcid: 4420 00:26:49.758 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:49.758 traddr: 10.0.0.1 00:26:49.758 eflags: none 00:26:49.758 sectype: none 00:26:49.758 =====Discovery Log Entry 1====== 00:26:49.758 trtype: tcp 00:26:49.758 adrfam: ipv4 00:26:49.758 subtype: nvme subsystem 00:26:49.758 treq: not specified, sq flow control disable supported 00:26:49.758 portid: 1 00:26:49.758 trsvcid: 4420 00:26:49.758 subnqn: kernel_target 00:26:49.758 traddr: 10.0.0.1 00:26:49.758 eflags: none 00:26:49.758 sectype: none 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.758 13:11:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
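The configure_kernel_target sequence above builds a kernel nvmet target over configfs: a subsystem named kernel_target backed by /dev/nvme1n3 (the last unused block device found by the GPT scan), exported on 10.0.0.1:4420 over TCP. The trace shows the mkdir/echo calls but not their redirection targets, so the attribute file names below are the standard nvmet configfs ones rather than values read from the log:

    modprobe nvmet
    modprobe nvmet-tcp
    cd /sys/kernel/config/nvmet

    mkdir subsystems/kernel_target
    echo 1 > subsystems/kernel_target/attr_allow_any_host          # no host allow-list

    mkdir subsystems/kernel_target/namespaces/1
    echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
    echo 1 > subsystems/kernel_target/namespaces/1/enable

    mkdir ports/1
    echo tcp      > ports/1/addr_trtype
    echo ipv4     > ports/1/addr_adrfam
    echo 10.0.0.1 > ports/1/addr_traddr
    echo 4420     > ports/1/addr_trsvcid

    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/

    nvme discover -t tcp -a 10.0.0.1 -s 4420                       # re-read the discovery log shown above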
00:26:49.759 13:11:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.759 13:11:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:49.759 13:11:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.759 13:11:30 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:53.045 Initializing NVMe Controllers 00:26:53.045 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:53.045 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:53.045 Initialization complete. Launching workers. 00:26:53.045 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31082, failed: 0 00:26:53.045 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31082, failed to submit 0 00:26:53.045 success 0, unsuccess 31082, failed 0 00:26:53.045 13:11:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.045 13:11:33 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:56.333 Initializing NVMe Controllers 00:26:56.333 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:56.333 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:56.333 Initialization complete. Launching workers. 00:26:56.333 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66388, failed: 0 00:26:56.333 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27053, failed to submit 39335 00:26:56.333 success 0, unsuccess 27053, failed 0 00:26:56.333 13:11:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:56.333 13:11:36 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:59.643 Initializing NVMe Controllers 00:26:59.643 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:59.643 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:59.643 Initialization complete. Launching workers. 
00:26:59.643 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 72236, failed: 0 00:26:59.643 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18002, failed to submit 54234 00:26:59.643 success 0, unsuccess 18002, failed 0 00:26:59.643 13:11:40 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:59.643 13:11:40 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:59.643 13:11:40 -- nvmf/common.sh@677 -- # echo 0 00:26:59.643 13:11:40 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:59.643 13:11:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:59.643 13:11:40 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.643 13:11:40 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:59.643 13:11:40 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.643 13:11:40 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:59.643 00:26:59.643 real 0m10.426s 00:26:59.643 user 0m5.210s 00:26:59.643 sys 0m2.501s 00:26:59.643 13:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:59.643 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:26:59.643 ************************************ 00:26:59.643 END TEST kernel_target_abort 00:26:59.643 ************************************ 00:26:59.643 13:11:40 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:59.643 13:11:40 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:59.643 13:11:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:59.643 13:11:40 -- nvmf/common.sh@116 -- # sync 00:26:59.643 13:11:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:59.643 13:11:40 -- nvmf/common.sh@119 -- # set +e 00:26:59.643 13:11:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:59.643 13:11:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:59.643 rmmod nvme_tcp 00:26:59.643 rmmod nvme_fabrics 00:26:59.643 rmmod nvme_keyring 00:26:59.643 13:11:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:59.643 13:11:40 -- nvmf/common.sh@123 -- # set -e 00:26:59.643 13:11:40 -- nvmf/common.sh@124 -- # return 0 00:26:59.643 13:11:40 -- nvmf/common.sh@477 -- # '[' -n 103092 ']' 00:26:59.643 13:11:40 -- nvmf/common.sh@478 -- # killprocess 103092 00:26:59.643 13:11:40 -- common/autotest_common.sh@936 -- # '[' -z 103092 ']' 00:26:59.643 13:11:40 -- common/autotest_common.sh@940 -- # kill -0 103092 00:26:59.643 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103092) - No such process 00:26:59.643 Process with pid 103092 is not found 00:26:59.643 13:11:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103092 is not found' 00:26:59.643 13:11:40 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:59.643 13:11:40 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:00.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:00.207 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:00.207 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:00.207 13:11:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:00.207 13:11:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:00.207 13:11:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.207 13:11:40 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:00.207 13:11:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.207 13:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:00.207 13:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.207 13:11:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:00.465 00:27:00.465 real 0m24.676s 00:27:00.465 user 0m50.677s 00:27:00.465 sys 0m5.531s 00:27:00.465 13:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.465 ************************************ 00:27:00.465 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:27:00.465 END TEST nvmf_abort_qd_sizes 00:27:00.465 ************************************ 00:27:00.465 13:11:41 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:00.465 13:11:41 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:00.466 13:11:41 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:00.466 13:11:41 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:00.466 13:11:41 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:00.466 13:11:41 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:00.466 13:11:41 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:00.466 13:11:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:00.466 13:11:41 -- common/autotest_common.sh@10 -- # set +x 00:27:00.466 13:11:41 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:00.466 13:11:41 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:00.466 13:11:41 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:00.466 13:11:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.368 INFO: APP EXITING 00:27:02.368 INFO: killing all VMs 00:27:02.368 INFO: killing vhost app 00:27:02.368 INFO: EXIT DONE 00:27:02.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:02.627 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:02.627 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:03.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.562 Cleaning 00:27:03.562 Removing: /var/run/dpdk/spdk0/config 00:27:03.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:03.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:03.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:03.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:03.562 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:03.562 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:03.562 Removing: /var/run/dpdk/spdk1/config 00:27:03.562 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:03.562 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:03.562 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:03.562 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:03.562 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:03.562 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:03.562 Removing: /var/run/dpdk/spdk2/config 00:27:03.562 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:03.562 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:03.562 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:03.562 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:03.562 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:03.562 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:03.562 Removing: /var/run/dpdk/spdk3/config 00:27:03.562 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:03.562 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:03.562 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:03.562 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:03.562 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:03.562 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:03.562 Removing: /var/run/dpdk/spdk4/config 00:27:03.562 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:03.562 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:03.562 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:03.562 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:03.563 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:03.563 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:03.563 Removing: /dev/shm/nvmf_trace.0 00:27:03.563 Removing: /dev/shm/spdk_tgt_trace.pid67250 00:27:03.563 Removing: /var/run/dpdk/spdk0 00:27:03.563 Removing: /var/run/dpdk/spdk1 00:27:03.563 Removing: /var/run/dpdk/spdk2 00:27:03.563 Removing: /var/run/dpdk/spdk3 00:27:03.563 Removing: /var/run/dpdk/spdk4 00:27:03.563 Removing: /var/run/dpdk/spdk_pid100063 00:27:03.563 Removing: /var/run/dpdk/spdk_pid100264 00:27:03.563 Removing: /var/run/dpdk/spdk_pid100556 00:27:03.563 Removing: /var/run/dpdk/spdk_pid100863 00:27:03.563 Removing: /var/run/dpdk/spdk_pid101411 00:27:03.563 Removing: /var/run/dpdk/spdk_pid101416 00:27:03.563 Removing: /var/run/dpdk/spdk_pid101789 00:27:03.563 Removing: /var/run/dpdk/spdk_pid101949 00:27:03.563 Removing: /var/run/dpdk/spdk_pid102110 00:27:03.563 Removing: /var/run/dpdk/spdk_pid102207 00:27:03.563 Removing: /var/run/dpdk/spdk_pid102363 00:27:03.563 Removing: /var/run/dpdk/spdk_pid102472 00:27:03.563 Removing: /var/run/dpdk/spdk_pid103161 00:27:03.563 Removing: /var/run/dpdk/spdk_pid103191 00:27:03.563 Removing: /var/run/dpdk/spdk_pid103226 00:27:03.563 Removing: /var/run/dpdk/spdk_pid103475 00:27:03.563 Removing: /var/run/dpdk/spdk_pid103511 00:27:03.563 Removing: /var/run/dpdk/spdk_pid103542 00:27:03.563 Removing: /var/run/dpdk/spdk_pid67098 00:27:03.563 Removing: /var/run/dpdk/spdk_pid67250 00:27:03.563 Removing: /var/run/dpdk/spdk_pid67571 00:27:03.563 Removing: /var/run/dpdk/spdk_pid67846 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68029 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68107 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68206 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68308 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68341 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68371 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68445 00:27:03.563 Removing: /var/run/dpdk/spdk_pid68544 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69175 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69238 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69303 00:27:03.563 Removing: 
/var/run/dpdk/spdk_pid69331 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69410 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69438 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69517 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69545 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69596 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69626 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69678 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69708 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69861 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69897 00:27:03.563 Removing: /var/run/dpdk/spdk_pid69973 00:27:03.563 Removing: /var/run/dpdk/spdk_pid70048 00:27:03.563 Removing: /var/run/dpdk/spdk_pid70067 00:27:03.563 Removing: /var/run/dpdk/spdk_pid70131 00:27:03.563 Removing: /var/run/dpdk/spdk_pid70145 00:27:03.563 Removing: /var/run/dpdk/spdk_pid70185 00:27:03.563 Removing: /var/run/dpdk/spdk_pid70199 00:27:03.563 Removing: /var/run/dpdk/spdk_pid70228 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70255 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70284 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70304 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70338 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70352 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70387 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70406 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70441 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70455 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70489 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70511 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70544 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70560 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70595 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70613 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70649 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70663 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70697 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70717 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70746 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70771 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70803 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70817 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70857 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70871 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70900 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70925 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70954 00:27:03.822 Removing: /var/run/dpdk/spdk_pid70977 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71014 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71031 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71073 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71088 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71123 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71142 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71178 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71249 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71354 00:27:03.822 Removing: /var/run/dpdk/spdk_pid71786 00:27:03.822 Removing: /var/run/dpdk/spdk_pid78736 00:27:03.822 Removing: /var/run/dpdk/spdk_pid79087 00:27:03.822 Removing: /var/run/dpdk/spdk_pid81515 00:27:03.822 Removing: /var/run/dpdk/spdk_pid81898 00:27:03.822 Removing: /var/run/dpdk/spdk_pid82173 00:27:03.822 Removing: /var/run/dpdk/spdk_pid82219 00:27:03.822 Removing: /var/run/dpdk/spdk_pid82530 00:27:03.822 Removing: /var/run/dpdk/spdk_pid82586 00:27:03.822 Removing: /var/run/dpdk/spdk_pid82965 00:27:03.822 Removing: /var/run/dpdk/spdk_pid83492 00:27:03.822 Removing: /var/run/dpdk/spdk_pid83922 00:27:03.822 Removing: /var/run/dpdk/spdk_pid84895 
00:27:03.822 Removing: /var/run/dpdk/spdk_pid85890 00:27:03.822 Removing: /var/run/dpdk/spdk_pid86006 00:27:03.822 Removing: /var/run/dpdk/spdk_pid86070 00:27:03.822 Removing: /var/run/dpdk/spdk_pid87550 00:27:03.822 Removing: /var/run/dpdk/spdk_pid87792 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88218 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88332 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88484 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88531 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88577 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88618 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88781 00:27:03.822 Removing: /var/run/dpdk/spdk_pid88928 00:27:03.822 Removing: /var/run/dpdk/spdk_pid89192 00:27:03.822 Removing: /var/run/dpdk/spdk_pid89315 00:27:03.822 Removing: /var/run/dpdk/spdk_pid89742 00:27:03.822 Removing: /var/run/dpdk/spdk_pid90121 00:27:03.822 Removing: /var/run/dpdk/spdk_pid90128 00:27:03.822 Removing: /var/run/dpdk/spdk_pid92381 00:27:03.822 Removing: /var/run/dpdk/spdk_pid92697 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93215 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93222 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93573 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93587 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93607 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93634 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93639 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93788 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93790 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93898 00:27:03.822 Removing: /var/run/dpdk/spdk_pid93906 00:27:03.822 Removing: /var/run/dpdk/spdk_pid94014 00:27:03.822 Removing: /var/run/dpdk/spdk_pid94016 00:27:03.822 Removing: /var/run/dpdk/spdk_pid94494 00:27:03.822 Removing: /var/run/dpdk/spdk_pid94540 00:27:03.822 Removing: /var/run/dpdk/spdk_pid94693 00:27:03.822 Removing: /var/run/dpdk/spdk_pid94815 00:27:04.081 Removing: /var/run/dpdk/spdk_pid95221 00:27:04.081 Removing: /var/run/dpdk/spdk_pid95467 00:27:04.081 Removing: /var/run/dpdk/spdk_pid95973 00:27:04.081 Removing: /var/run/dpdk/spdk_pid96541 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97005 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97082 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97153 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97239 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97383 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97473 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97568 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97654 00:27:04.081 Removing: /var/run/dpdk/spdk_pid97994 00:27:04.081 Removing: /var/run/dpdk/spdk_pid98694 00:27:04.081 Clean 00:27:04.081 killing process with pid 61509 00:27:04.081 killing process with pid 61510 00:27:04.081 13:11:44 -- common/autotest_common.sh@1446 -- # return 0 00:27:04.081 13:11:44 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:04.081 13:11:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.081 13:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:04.081 13:11:44 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:04.081 13:11:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.081 13:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:04.081 13:11:44 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:04.081 13:11:44 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:04.081 13:11:44 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:04.339 13:11:44 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:04.339 13:11:44 -- spdk/autotest.sh@383 -- # hostname 00:27:04.339 13:11:44 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:04.339 geninfo: WARNING: invalid characters removed from testname! 00:27:26.274 13:12:06 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:28.806 13:12:09 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:31.338 13:12:12 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:33.872 13:12:14 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:35.775 13:12:16 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:38.306 13:12:18 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:40.208 13:12:20 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:40.467 13:12:21 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:40.467 13:12:21 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:40.467 13:12:21 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:40.467 13:12:21 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:40.467 13:12:21 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:40.467 13:12:21 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
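The lcov invocations traced above implement the coverage post-processing for this run: counters produced by the tests are captured into cov_test.info, merged with the pre-test baseline cov_base.info into cov_total.info, and then coverage for anything outside the SPDK tree (DPDK, system headers under /usr, and a few example apps) is stripped out. A condensed sketch of that flow follows; the LCOV wrapper variable and the OUT directory are assumptions for illustration, not quoted from autotest.sh, and the per-pattern --ignore-errors handling is simplified.

    #!/usr/bin/env bash
    # Condensed sketch of the coverage post-processing traced above.
    # LCOV and OUT are illustrative names, not taken verbatim from autotest.sh.
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    OUT=/home/vagrant/spdk_repo/output

    # Capture the counters gathered while the tests ran.
    $LCOV -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
          -t "$(hostname)" -o "$OUT/cov_test.info"

    # Fold the pre-test baseline and the test capture into one tracefile.
    $LCOV -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
          -o "$OUT/cov_total.info"

    # Drop everything that is not SPDK code proper.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        $LCOV -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done

    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"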
00:27:40.467 13:12:21 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:40.467 13:12:21 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:40.467 13:12:21 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:40.467 13:12:21 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:40.467 13:12:21 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:40.467 13:12:21 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:40.467 13:12:21 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:40.467 13:12:21 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:40.467 13:12:21 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:40.467 13:12:21 -- scripts/common.sh@343 -- $ case "$op" in 00:27:40.467 13:12:21 -- scripts/common.sh@344 -- $ : 1 00:27:40.467 13:12:21 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:40.467 13:12:21 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:40.467 13:12:21 -- scripts/common.sh@364 -- $ decimal 1 00:27:40.467 13:12:21 -- scripts/common.sh@352 -- $ local d=1 00:27:40.467 13:12:21 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:40.467 13:12:21 -- scripts/common.sh@354 -- $ echo 1 00:27:40.467 13:12:21 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:40.467 13:12:21 -- scripts/common.sh@365 -- $ decimal 2 00:27:40.467 13:12:21 -- scripts/common.sh@352 -- $ local d=2 00:27:40.467 13:12:21 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:40.467 13:12:21 -- scripts/common.sh@354 -- $ echo 2 00:27:40.467 13:12:21 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:40.467 13:12:21 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:40.467 13:12:21 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:40.467 13:12:21 -- scripts/common.sh@367 -- $ return 0 00:27:40.467 13:12:21 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.467 13:12:21 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.467 --rc genhtml_branch_coverage=1 00:27:40.467 --rc genhtml_function_coverage=1 00:27:40.467 --rc genhtml_legend=1 00:27:40.467 --rc geninfo_all_blocks=1 00:27:40.467 --rc geninfo_unexecuted_blocks=1 00:27:40.467 00:27:40.467 ' 00:27:40.467 13:12:21 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.467 --rc genhtml_branch_coverage=1 00:27:40.467 --rc genhtml_function_coverage=1 00:27:40.467 --rc genhtml_legend=1 00:27:40.467 --rc geninfo_all_blocks=1 00:27:40.467 --rc geninfo_unexecuted_blocks=1 00:27:40.467 00:27:40.467 ' 00:27:40.467 13:12:21 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.467 --rc genhtml_branch_coverage=1 00:27:40.467 --rc genhtml_function_coverage=1 00:27:40.467 --rc genhtml_legend=1 00:27:40.467 --rc geninfo_all_blocks=1 00:27:40.467 --rc geninfo_unexecuted_blocks=1 00:27:40.467 00:27:40.467 ' 00:27:40.467 13:12:21 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.467 --rc genhtml_branch_coverage=1 00:27:40.467 --rc genhtml_function_coverage=1 00:27:40.467 --rc genhtml_legend=1 00:27:40.467 --rc geninfo_all_blocks=1 00:27:40.467 --rc geninfo_unexecuted_blocks=1 00:27:40.467 00:27:40.468 ' 00:27:40.468 13:12:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:40.468 13:12:21 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:40.468 13:12:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.468 13:12:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.468 13:12:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.468 13:12:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.468 13:12:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.468 13:12:21 -- paths/export.sh@5 -- $ export PATH 00:27:40.468 13:12:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.468 13:12:21 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:40.468 13:12:21 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:40.468 13:12:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734095541.XXXXXX 00:27:40.468 13:12:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734095541.umRyRd 00:27:40.468 13:12:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:40.468 13:12:21 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:27:40.468 13:12:21 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:40.468 13:12:21 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:40.468 13:12:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:40.468 13:12:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:40.468 13:12:21 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:40.468 13:12:21 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:40.468 13:12:21 -- common/autotest_common.sh@10 -- $ set +x 00:27:40.468 13:12:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:40.468 13:12:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:40.468 13:12:21 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:40.468 13:12:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:40.468 13:12:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:40.468 13:12:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:40.468 13:12:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:40.468 13:12:21 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:40.468 13:12:21 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:40.468 13:12:21 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:40.468 13:12:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:40.468 + [[ -n 5964 ]] 00:27:40.468 + sudo kill 5964 00:27:40.736 [Pipeline] } 00:27:40.751 [Pipeline] // timeout 00:27:40.757 [Pipeline] } 00:27:40.771 [Pipeline] // stage 00:27:40.776 [Pipeline] } 00:27:40.789 [Pipeline] // catchError 00:27:40.798 [Pipeline] stage 00:27:40.800 [Pipeline] { (Stop VM) 00:27:40.812 [Pipeline] sh 00:27:41.097 + vagrant halt 00:27:44.399 ==> default: Halting domain... 00:27:49.719 [Pipeline] sh 00:27:49.997 + vagrant destroy -f 00:27:52.529 ==> default: Removing domain... 00:27:52.799 [Pipeline] sh 00:27:53.079 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:53.088 [Pipeline] } 00:27:53.101 [Pipeline] // stage 00:27:53.106 [Pipeline] } 00:27:53.116 [Pipeline] // dir 00:27:53.121 [Pipeline] } 00:27:53.132 [Pipeline] // wrap 00:27:53.139 [Pipeline] } 00:27:53.151 [Pipeline] // catchError 00:27:53.159 [Pipeline] stage 00:27:53.161 [Pipeline] { (Epilogue) 00:27:53.174 [Pipeline] sh 00:27:53.455 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:58.744 [Pipeline] catchError 00:27:58.746 [Pipeline] { 00:27:58.759 [Pipeline] sh 00:27:59.039 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:59.298 Artifacts sizes are good 00:27:59.306 [Pipeline] } 00:27:59.320 [Pipeline] // catchError 00:27:59.330 [Pipeline] archiveArtifacts 00:27:59.336 Archiving artifacts 00:27:59.452 [Pipeline] cleanWs 00:27:59.463 [WS-CLEANUP] Deleting project workspace... 00:27:59.463 [WS-CLEANUP] Deferred wipeout is used... 00:27:59.468 [WS-CLEANUP] done 00:27:59.470 [Pipeline] } 00:27:59.484 [Pipeline] // stage 00:27:59.489 [Pipeline] } 00:27:59.502 [Pipeline] // node 00:27:59.507 [Pipeline] End of Pipeline 00:27:59.544 Finished: SUCCESS
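The closing pipeline stages above amount to a short teardown sequence: halt and destroy the Vagrant test VM, move the collected output into the Jenkins workspace, then compress and size-check the artifacts before they are archived. Expressed as plain shell (the Jenkinsfile stage/step structure is paraphrased here, not quoted):

    # Paraphrase of the epilogue stages as plain shell; the Jenkins stage
    # boundaries and error handling are omitted.
    set -e

    vagrant halt            # Stop VM
    vagrant destroy -f      # Remove domain

    # Make the results visible to the archiving steps.
    mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output

    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh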
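For reference, the version check traced from scripts/common.sh shortly before autopackage ran (the "lt 1.15 2" step that decides which lcov --rc option names to keep) boils down to splitting two version strings on '.', '-' or ':' and comparing them field by field. The sketch below follows the function names from that trace but simplifies the per-field normalisation; fields are assumed numeric and missing fields count as 0.

    # Sketch of the lt/cmp_versions helpers traced from scripts/common.sh.
    # Names mirror the trace; the decimal() normalisation step is simplified.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    # As in the trace: lcov 1.15 predates 2.x, so the pre-2.0 option names
    # are kept for the coverage runs.
    if lt 1.15 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi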